| Summary | Count | Pct |
|---|---|---|
| Total | 45138535 | 100.0% |
| Stalls | 43501742 | 96.4% |
| Commits | 1636793 | 3.6% |

| Stall Types | Count | Pct |
|---|---|---|
| TOFF_CYCLES | 33853902 | 75.0% |
| CU_VFIFO_FULL_CYCLES | 5717339 | 12.7% |
| DCACHE_DEMAND_LOAD_MISS_CYCLES | 2420875 | 5.4% |
| CU_NOB2B_EXTENSION_CYCLES | 577485 | 1.3% |
| CU_INTERLOCK_CYCLES | 397340 | 0.9% |
| FE_MISPREDICT_TIME_CYCLES | 120763 | 0.3% |
| POST_REPLAY_B2B_BLOCK | 97776 | 0.2% |
| ICACHE_DEMAND_MISS_CYCLES | 97741 | 0.2% |
| INDIRECT_JUMP_CYCLES | 45740 | 0.1% |
| DU_BACKPRESSURE_CYCLES | 26847 | 0.1% |
| ENDLOOP_CYCLES | 22994 | 0.1% |
| CU_BE_NOBSB_CYCLES | 18550 | 0.0% |
| IU_HIT_STOP_ENDLOOP_TAKEN | 17489 | 0.0% |
| IU_HIT_STOP_BTB_HIT | 15912 | 0.0% |
| CU_RXX_INTERLOCK_CYCLES | 11816 | 0.0% |
| CU_QOS_NODISPATCH_CYCLES | 10657 | 0.0% |
| DU_FILL_CONFLICT_CYCLES | 9762 | 0.0% |
| CU_DU_XU_NO_FWD_CYCLES | 9067 | 0.0% |
| CU_BE_NOB2B_CYCLES | 6696 | 0.0% |
| CU_PREG_INTERLOCK_CYCLES | 6277 | 0.0% |
| IU_HIT_STOP_PARTIAL | 4681 | 0.0% |
| DUNCACHED_DEMAND_MISS_CYCLES | 2114 | 0.0% |
| IU_HIT_STOP_RETURN | 1749 | 0.0% |
| CU_EARLY_WRITE_CYCLES | 1738 | 0.0% |
| IU_FETCH_CROSS_CYCLES | 1484 | 0.0% |
| IU_HIT_STOP_BTB_MISS | 1481 | 0.0% |
| FE_ISYNC | 1476 | 0.0% |
| CU_DUAL_WRITE_INTERLOCK_CYCLES | 1103 | 0.0% |
| IU_HIT_STOP_EOL | 509 | 0.0% |
| SYNCHT_CYCLES | 157 | 0.0% |
| CU_FP_RX_NO_NTWK_CYCLES | 74 | 0.0% |
| FE_PICK_L2FILL | 64 | 0.0% |
| DU_UTLB_MISS_CYCLES | 54 | 0.0% |
| FE_TAG_CONFLICT | 8 | 0.0% |
| IU_UTLB_MISS_CYCLES | 8 | 0.0% |
| DU_BANK_CONFLICT_CYCLES | 6 | 0.0% |
| CU_WRITE_REG_BLOCK_CYCLES | 4 | 0.0% |
| CU_CREG_INTERLOCK_CYCLES | 2 | 0.0% |
| CU_WRITE_PORT_BLOCK_CYCLES | 2 | 0.0% |
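The Pct column above can be reproduced from the raw counts. A minimal Python sketch, with counter values copied from the table; the convention (stall cycles divided by total cycles, rounded to one decimal) is inferred from the numbers shown:

```python
# Reproduce the stall-breakdown percentages from the raw counts.
# Total cycles and the three largest stall counters are copied from
# the Summary and Stall Types tables above.
total_cycles = 45138535
stall_counts = {
    "TOFF_CYCLES": 33853902,
    "CU_VFIFO_FULL_CYCLES": 5717339,
    "DCACHE_DEMAND_LOAD_MISS_CYCLES": 2420875,
}

def pct(count: int, total: int) -> float:
    """Share of total cycles, rounded to one decimal as in the report."""
    return round(100.0 * count / total, 1)

for name, count in stall_counts.items():
    print(f"{name}: {pct(count, total_cycles)}%")
```

Running this recovers the 75.0% / 12.7% / 5.4% figures from the table.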

| Top Packets | Top Functions |

| PMU Events | Count |
|---|---|
| CYCLES_1_THREAD_RUNNING | 11284633 |
| COMMITTED_PKT_ANY | 1636789 |
| COMMITTED_PKT_T0 | 1636789 |
| COMMITTED_PKT_1_THREAD_RUNNING | 1636788 |
| COMMITTED_PKT_BSB | 1013775 |
| L2_ACCESS | 989332 |
| L2_ACCESS_EVEN | 480082 |
| COMMITTED_PKT_B2B | 473651 |
| L2_PIPE_CONFLICT | 285755 |
| L2_TAG_ARRAY_CONFLICT | 185696 |
| AXI_WRITE_REQUEST | 137491 |
| AXI_LINE64_WRITE_REQUEST | 137488 |
| DU_WRITE_TO_L2 | 127677 |
| L2_DU_STORE_ACCESS | 127677 |
| AXI_READ_REQUEST | 115893 |
| AXI_LINE64_READ_REQUEST | 115880 |
| HVX_L2_LOAD_MISS | 97259 |
| HVX_L2_STORE_MISS | 97124 |
| DCACHE_STORE_MISS | 79940 |
| L2_DU_STORE_COALESCE | 79288 |
| DU_STORE_BUFFER_ACCESS | 37217 |
| L2_DU_STORE_MISS | 37135 |
| HVX_L2_LOAD_SECONDARY_MISS | 26682 |
| HVX_REG_ORDER | 24299 |
| DCACHE_DEMAND_MISS | 14158 |
| DU_READ_TO_L2 | 13093 |
| L2_DU_READ_ACCESS | 13080 |
| ANY_DU_REPLAY | 12205 |
| ANY_DU_STALL | 12019 |
| L2_FIFO_FULL_REPLAY | 8949 |
| REDIRECT_MISC | 8895 |
| L2_DU_READ_MISS | 8473 |
| REDIRECT_LOOP_MISPREDICT | 7011 |
| L2_DU_LOAD_SECONDARY_MISS | 4588 |
| DU_FILL_REPLAY | 3254 |
| DU_DEMAND_SECONDARY_MISS | 2158 |
| L2_IU_ACCESS | 1194 |
| L2_IU_MISS | 1119 |
| DCFETCH_MISS | 1080 |
| L2_DU_PREFETCH_ACCESS | 1080 |
| L2_DU_PREFETCH_MISS | 1080 |
| REDIRECT_BIMODAL_MISPREDICT | 659 |
| ICACHE_DEMAND_MISS | 480 |
| L2_IU_PREFETCH_ACCESS | 345 |
| L2_IU_PREFETCH_MISS | 327 |
| DCFETCH_HIT | 190 |
| IU_DEMAND_SECONDARY_MISS | 55 |
| REDIRECT_TARGET_MISPREDICT | 52 |
| L2_STORE_LINK | 18 |
| DU_LOAD_UNCACHEABLE | 13 |
| HVX_PKT_THREAD | 7 |
| DTLB_MISS | 6 |
| DU_STORE_UNCACHEABLE | 3 |
| DU_BANK_CONFLICT_REPLAY | 2 |
| ITLB_MISS | 1 |

| Summary | Count | Pct |
|---|---|---|
| Total | 3530432 | 100.0% |
| Stalls | 2874224 | 81.4% |
| Commits | 656208 | 18.6% |

| Stall Types | Count | Pct |
|---|---|---|
| HVX_LD_L2_OUTSTANDING | 2855977 | 80.9% |
| HVX_REG_ORDER | 18224 | 0.5% |
| HVX_ST_L2_OUTSTANDING | 23 | 0.0% |

| Index | Name | Count | Count Per-packet |
|---|---|---|---|
| 0x1 | COUNTER0_OVERFLOW | 0 | 0.0 |
| 0x2 | COUNTER2_OVERFLOW | 0 | 0.0 |
| 0x3 | COMMITTED_PKT_ANY | 458017 | 1.0 |
| 0x4 | COMMITTED_PKT_BSB | 277634 | 0.606165 |
| 0x5 | COUNTER4_OVERFLOW | 0 | 0.0 |
| 0x6 | COUNTER6_OVERFLOW | 0 | 0.0 |
| 0x7 | COMMITTED_PKT_B2B | 126874 | 0.277007 |
| 0x8 | COMMITTED_PKT_SMT | 0 | 0.0 |
| 0x9 | IU_CREDIT_FAIL | 0 | 0.0 |
| 0xa | CYCLES_5_THREAD_RUNNING | 0 | 0.0 |
| 0xb | CYCLES_6_THREAD_RUNNING | 0 | 0.0 |
| 0xc | COMMITTED_PKT_T0 | 458017 | 1.0 |
| 0xd | COMMITTED_PKT_T1 | 0 | 0.0 |
| 0xe | COMMITTED_PKT_T2 | 0 | 0.0 |
| 0xf | COMMITTED_PKT_T3 | 0 | 0.0 |
| 0x12 | ICACHE_DEMAND_MISS | 163 | 0.000356 |
| 0x13 | DCACHE_DEMAND_MISS | 3273 | 0.007146 |
| 0x14 | DCACHE_STORE_MISS | 154 | 0.000336 |
| 0x17 | CU_PKT_READY_NOT_DISPATCHED | 0 | 0.0 |
| 0x1c | IU_L1S_ACCESS | 0 | 0.0 |
| 0x1d | IU_L1S_PREFETCH | 0 | 0.0 |
| 0x1e | IU_L1S_AXIS_STALL | 0 | 0.0 |
| 0x1f | IU_L1S_NO_GRANT | 0 | 0.0 |
| 0x20 | ANY_IU_REPLAY | 1802 | 0.003934 |
| 0x21 | ANY_DU_REPLAY | 3061 | 0.006683 |
| 0x23 | ISSUED_PACKETS | 477568 | 1.042686 |
| 0x24 | LOOPCACHE_PACKETS | 0 | 0.0 |
| 0x25 | COMMITTED_PKT_1_THREAD_RUNNING | 458016 | 0.999998 |
| 0x26 | COMMITTED_PKT_2_THREAD_RUNNING | 0 | 0.0 |
| 0x27 | COMMITTED_PKT_3_THREAD_RUNNING | 0 | 0.0 |
| 0x2a | COMMITTED_INSTS | 867565 | 1.894176 |
| 0x2b | COMMITTED_TC1_INSTS | 691068 | 1.508826 |
| 0x2c | COMMITTED_PRIVATE_INSTS | 692535 | 1.512029 |
| 0x2f | COMMITTED_PKT_4_THREAD_RUNNING | 0 | 0.0 |
| 0x30 | COMMITTED_LOADS | 35532 | 0.077578 |
| 0x31 | COMMITTED_STORES | 17350 | 0.037881 |
| 0x32 | COMMITTED_MEMOPS | 96 | 0.00021 |
| 0x37 | COMMITTED_PROGRAM_FLOW_INSTS | 103825 | 0.226684 |
| 0x38 | COMMITTED_PKT_CHANGED_FLOW | 93466 | 0.204067 |
| 0x39 | COMMITTED_PKT_ENDLOOP | 71046 | 0.155117 |
| 0x3b | CYCLES_1_THREAD_RUNNING | 3648959 | 7.966864 |
| 0x3c | CYCLES_2_THREAD_RUNNING | 0 | 0.0 |
| 0x3d | CYCLES_3_THREAD_RUNNING | 0 | 0.0 |
| 0x3e | CYCLES_4_THREAD_RUNNING | 0 | 0.0 |
| 0x3f | AXI_LINE128_READ_REQUEST | 0 | 0.0 |
| 0x40 | AXI_READ_REQUEST | 52236 | 0.114048 |
| 0x41 | AXI_LINE32_READ_REQUEST | 0 | 0.0 |
| 0x42 | AXI_WRITE_REQUEST | 57812 | 0.126222 |
| 0x43 | AXI_LINE32_WRITE_REQUEST | 0 | 0.0 |
| 0x44 | AHB_READ_REQUEST | 0 | 0.0 |
| 0x45 | AHB_WRITE_REQUEST | 0 | 0.0 |
| 0x46 | AXI_LINE128_WRITE_REQUEST | 0 | 0.0 |
| 0x47 | AXI_SLAVE_MULTI_BEAT_ACCESS | 0 | 0.0 |
| 0x48 | AXI_SLAVE_SINGLE_BEAT_ACCESS | 0 | 0.0 |
| 0x49 | AXI2_READ_REQUEST | 0 | 0.0 |
| 0x4a | AXI2_LINE32_READ_REQUEST | 0 | 0.0 |
| 0x4b | AXI2_WRITE_REQUEST | 0 | 0.0 |
| 0x4c | AXI2_LINE32_WRITE_REQUEST | 0 | 0.0 |
| 0x4d | AXI2_CONGESTION | 0 | 0.0 |
| 0x4e | DMA_SLAVE_MULTI_BEAT_ACCESS | 0 | 0.0 |
| 0x4f | DMA_SLAVE_SINGLE_BEAT_ACCESS | 0 | 0.0 |
| 0x50 | COMMITTED_FPS | 156 | 0.000341 |
| 0x51 | REDIRECT_BIMODAL_MISPREDICT | 201 | 0.000439 |
| 0x52 | REDIRECT_TARGET_MISPREDICT | 28 | 6.1e-05 |
| 0x53 | REDIRECT_LOOP_MISPREDICT | 1363 | 0.002976 |
| 0x54 | REDIRECT_MISC | 2291 | 0.005002 |
| 0x56 | NUM_PACKET_CRACKED | 0 | 0.0 |
| 0x58 | JTLB_MISS | 0 | 0.0 |
| 0x5a | COMMITTED_PKT_RETURN | 5828 | 0.012724 |
| 0x5b | COMMITTED_PKT_INDIRECT_JUMP | 110 | 0.00024 |
| 0x5c | COMMITTED_BIMODAL_BRANCH_INSTS | 13230 | 0.028885 |
| 0x5d | BRANCH_QUEUE_FULL | 0 | 0.0 |
| 0x5e | DU_REQUESTED_BUBBLE_INSERTED | 0 | 0.0 |
| 0x5f | VTCM_SCALAR_FIFO_FULL_CYCLES | 0 | 0.0 |
| 0x61 | DU_L1S_LOAD_ACCESS | 0 | 0.0 |
| 0x62 | ICACHE_ACCESS | 265860 | 0.580459 |
| 0x63 | BTB_HIT | 16544 | 0.036121 |
| 0x64 | BTB_MISS | 743 | 0.001622 |
| 0x65 | IU_DEMAND_SECONDARY_MISS | 20 | 4.4e-05 |
| 0x66 | IU_LINE_FROM_HWLOOP | 0 | 0.0 |
| 0x67 | FAST_FETCH_KILLED | 3889 | 0.008491 |
| 0x68 | IU_1_PKT_AVAILABLE_TO_ISSUE | 71400 | 0.155889 |
| 0x69 | FETCHED_PACKETS_DROPPED | 0 | 0.0 |
| 0x6a | IU_REQUESTS_TO_L2_REPLAYED | 0 | 0.0 |
| 0x6b | IU_PREFETCHES_SENT_TO_L2 | 107 | 0.000234 |
| 0x6c | ITLB_MISS | 0 | 0.0 |
| 0x6d | IU_2_PKT_AVAILABLE_TO_ISSUE | 171671 | 0.374814 |
| 0x6e | IU_3_PKT_AVAILABLE_TO_ISSUE | 20942 | 0.045723 |
| 0x6f | IU_REQUEST_STALLED | 0 | 0.0 |
| 0x70 | IU_BIMODAL_L2_ELIGIBLE | 102 | 0.000223 |
| 0x71 | IU_0_PKT_AVAILABLE_TO_ISSUE | 1802 | 0.003934 |
| 0x72 | FETCH_2_CYCLE | 71992 | 0.157182 |
| 0x73 | FETCH_3_CYCLE | 80128 | 0.174945 |
| 0x74 | IU_PREFETCHES_DROPPED | 1 | 2e-06 |
| 0x75 | L2_IU_SECONDARY_MISS | 0 | 0.0 |
| 0x76 | L2_IU_ACCESS | 250 | 0.000546 |
| 0x77 | L2_IU_MISS | 224 | 0.000489 |
| 0x78 | L2_IU_PREFETCH_ACCESS | 107 | 0.000234 |
| 0x79 | L2_IU_PREFETCH_MISS | 97 | 0.000212 |
| 0x7a | L2_IU_BRANCH_CACHE_WRITE_REQUEST | 102 | 0.000223 |
| 0x7b | L2_IU_BRANCH_CACHE_WRITE | 102 | 0.000223 |
| 0x7c | L2_DU_READ_ACCESS | 3273 | 0.007146 |
| 0x7d | L2_DU_READ_MISS | 2064 | 0.004506 |
| 0x7e | L2FETCH_ACCESS | 0 | 0.0 |
| 0x7f | L2FETCH_MISS | 0 | 0.0 |
| 0x80 | L2_AXI_INTERLEAVE_DROP | 0 | 0.0 |
| 0x81 | L2_ACCESS | 356933 | 0.779301 |
| 0x82 | L2_PIPE_CONFLICT | 144564 | 0.31563 |
| 0x83 | L2_TAG_ARRAY_CONFLICT | 79066 | 0.172627 |
| 0x84 | AXI_RD_CONGESTION | 56403 | 0.123146 |
| 0x85 | AHB_CONGESTION | 0 | 0.0 |
| 0x86 | SNOOP_BLOCK | 0 | 0.0 |
| 0x87 | TCM_DU_ACCESS | 0 | 0.0 |
| 0x88 | TCM_DU_READ_ACCESS | 0 | 0.0 |
| 0x89 | TCM_IU_ACCESS | 0 | 0.0 |
| 0x8a | L2_CASTOUT | 53178 | 0.116105 |
| 0x8b | L2_DU_STORE_ACCESS | 17087 | 0.037306 |
| 0x8c | L2_DU_STORE_MISS | 76 | 0.000166 |
| 0x8d | L2_DU_PREFETCH_ACCESS | 0 | 0.0 |
| 0x8e | L2_DU_PREFETCH_MISS | 0 | 0.0 |
| 0x8f | L2_DU_RETURN_NOT_ACKED | 0 | 0.0 |
| 0x90 | L2_DU_LOAD_SECONDARY_MISS | 1206 | 0.002633 |
| 0x91 | L2FETCH_COMMAND | 0 | 0.0 |
| 0x92 | L2FETCH_COMMAND_KILLED | 0 | 0.0 |
| 0x93 | L2FETCH_COMMAND_OVERWRITE | 0 | 0.0 |
| 0x94 | L2FETCH_ACCESS_CREDIT_FAIL | 0 | 0.0 |
| 0x95 | AXI_SLAVE_READ_BUSY | 0 | 0.0 |
| 0x96 | AXI_SLAVE_WRITE_BUSY | 0 | 0.0 |
| 0x97 | L2_ACCESS_EVEN | 188056 | 0.410587 |
| 0x98 | CLADE_HIGH_PRIO_L2_ACCESS | 0 | 0.0 |
| 0x99 | CLADE_LOW_PRIO_L2_ACCESS | 0 | 0.0 |
| 0x9a | CLADE_HIGH_PRIO_L2_MISS | 0 | 0.0 |
| 0x9b | CLADE_LOW_PRIO_L2_MISS | 0 | 0.0 |
| 0x9c | CLADE_HIGH_PRIO_EXCEPTION | 0 | 0.0 |
| 0x9d | CLADE_LOW_PRIO_EXCEPTION | 0 | 0.0 |
| 0x9e | AXI2_SLAVE_READ_BUSY | 0 | 0.0 |
| 0x9f | AXI2_SLAVE_WRITE_BUSY | 0 | 0.0 |
| 0xa0 | ANY_DU_STALL | 3273 | 0.007146 |
| 0xa1 | DU_BANK_CONFLICT_REPLAY | 2 | 4e-06 |
| 0xa2 | DU_CREDIT_REPLAY | 0 | 0.0 |
| 0xa3 | L2_FIFO_FULL_REPLAY | 1972 | 0.004306 |
| 0xa4 | DU_STORE_BUFFER_FULL_REPLAY | 0 | 0.0 |
| 0xa5 | DU_STORE_BUFFER_FORCED_DRAIN | 0 | 0.0 |
| 0xa6 | DU_SNOOP_CONFLICT_REPLAY | 0 | 0.0 |
| 0xa7 | DU_SNOOP_REQUEST | 0 | 0.0 |
| 0xa8 | DU_FILL_REPLAY | 1087 | 0.002373 |
| 0xa9 | DU_SECMISS_REPLAY | 0 | 0.0 |
| 0xaa | DU_SNOOP_REQUEST_CLEAN_HIT | 0 | 0.0 |
| 0xab | DU_EVICTIONS_SENT_TO_L2 | 0 | 0.0 |
| 0xac | DU_READ_TO_L2 | 3273 | 0.007146 |
| 0xad | DU_WRITE_TO_L2 | 17087 | 0.037306 |
| 0xaf | DCZERO_COMMITTED | 0 | 0.0 |
| 0xb3 | DTLB_MISS | 0 | 0.0 |
| 0xb5 | DU_STORE_BUFFER_ACCESS | 13292 | 0.029021 |
| 0xb6 | STORE_BUFFER_HIT_REPLAY | 0 | 0.0 |
| 0xb7 | STORE_BUFFER_FORCE_REPLAY | 0 | 0.0 |
| 0xb8 | TAG_WRITE_CONFLICT_REPLAY | 0 | 0.0 |
| 0xb9 | SMT_BANK_CONFLICT | 0 | 0.0 |
| 0xba | PORT_CONFLICT_REPLAY | 0 | 0.0 |
| 0xbd | PAGE_CROSS_REPLAY | 0 | 0.0 |
| 0xbe | DU_DEALLOC_SECURITY_REPLAY | 0 | 0.0 |
| 0xbf | DU_DEMAND_SECONDARY_MISS | 0 | 0.0 |
| 0xc0 | DU_MISC_REPLAY | 0 | 0.0 |
| 0xc3 | DCFETCH_COMMITTED | 101 | 0.000221 |
| 0xc4 | DCFETCH_HIT | 101 | 0.000221 |
| 0xc5 | DCFETCH_MISS | 0 | 0.0 |
| 0xc8 | DU_LOAD_UNCACHEABLE | 0 | 0.0 |
| 0xc9 | DU_DUAL_LOAD_UNCACHEABLE | 0 | 0.0 |
| 0xca | DU_STORE_UNCACHEABLE | 0 | 0.0 |
| 0xcc | MISS_TO_PREFETCH | 0 | 0.0 |
| 0xce | AXI_LINE64_READ_REQUEST | 52236 | 0.114048 |
| 0xcf | AXI_LINE64_WRITE_REQUEST | 57812 | 0.126222 |
| 0xd0 | AXI_WR_CONGESTION | 119188 | 0.260226 |
| 0xd1 | AHB_8_READ_REQUEST | 0 | 0.0 |
| 0xd2 | AXI_INCOMPLETE_WRITE_REQUEST | 0 | 0.0 |
| 0xd3 | L2FETCH_COMMAND_PAGE_TERMINATION | 0 | 0.0 |
| 0xd4 | REQUEST_STALL_WRITE_BUFFER_EXHAUSTION | 0 | 0.0 |
| 0xd5 | L2_DU_STORE_COALESCE | 14111 | 0.030809 |
| 0xd6 | L2_STORE_LINK | 93655 | 0.204479 |
| 0xd7 | L2_SCOREBOARD_70_PERCENT_FULL | 172728 | 0.377121 |
| 0xd8 | L2_SCOREBOARD_80_PERCENT_FULL | 39797 | 0.08689 |
| 0xd9 | L2_SCOREBOARD_90_PERCENT_FULL | 35 | 7.6e-05 |
| 0xda | L2_SCOREBOARD_FULL_REJECT | 0 | 0.0 |
| 0xdb | L2_DU_RETURN_REPLAYED | 0 | 0.0 |
| 0xdc | L2_EVICTION_BUFFERS_FULL | 1400 | 0.003057 |
| 0xdd | AHB_MULTI_BEAT_READ_REQUEST | 0 | 0.0 |
| 0xdf | L2_DU_LOAD_SECONDARY_MISS_ON_SW_PREFETCH | 0 | 0.0 |
| 0xe0 | L2FETCH_DROP | 0 | 0.0 |
| 0xe1 | REPLAY_MAXIMUM_FORCE | 0 | 0.0 |
| 0xe2 | SCHEDULER_WATCHDOG_FORCE | 0 | 0.0 |
| 0xe3 | LIVELOCK_REFETCH | 0 | 0.0 |
| 0xe4 | CYCLES_LIVELOCK_WARNING | 0 | 0.0 |
| 0xe5 | THREAD_OFF_PVIEW_CYCLES | 3648959 | 7.966864 |
| 0xe6 | ARCH_LOCK_PVIEW_CYCLES | 0 | 0.0 |
| 0xe7 | REDIRECT_PVIEW_CYCLES | 26798 | 0.058509 |
| 0xe8 | IU_NO_PKT_PVIEW_CYCLES | 41584 | 0.090791 |
| 0xe9 | DU_CACHE_MISS_PVIEW_CYCLES | 574027 | 1.253288 |
| 0xea | DU_BUSY_OTHER_PVIEW_CYCLES | 14237 | 0.031084 |
| 0xeb | CU_BUSY_PVIEW_CYCLES | 104476 | 0.228105 |
| 0xec | SMT_DU_CONFLICT_PVIEW_CYCLES | 0 | 0.0 |
| 0xed | COPROC_BUSY_PVIEW_CYCLES | 2422206 | 5.288463 |
| 0xee | DU_UNCACHED_PVIEW_CYCLES | 0 | 0.0 |
| 0xef | SYSTEM_BUSY_PVIEW_CYCLES | 7615 | 0.016626 |
| 0xf0 | APP_REPORTED | 0 | 0.0 |
| 0xff | VOLTAGE_CLOCK_GATING_CYCLES | 0 | 0.0 |
| 0x100 | HVX_ACTIVE | 5329286 | 11.635564 |
| 0x101 | HVX_WAIT_EMPTY | 2664856 | 5.818247 |
| 0x102 | HVX_EMPTY | 2664430 | 5.817317 |
| 0x103 | HVX_WAIT | 2664430 | 5.817317 |
| 0x104 | HVX_REG_ORDER | 12150 | 0.026527 |
| 0x105 | HVX_LD_VTCM_OUTSTANDING | 0 | 0.0 |
| 0x106 | HVX_LD_L2_OUTSTANDING | 1178723 | 2.573535 |
| 0x107 | HVX_ST_VTCM_OUTSTANDING | 0 | 0.0 |
| 0x108 | HVX_ST_L2_OUTSTANDING | 20 | 4.4e-05 |
| 0x109 | HVX_SCATGATH_OUTSTANDING | 0 | 0.0 |
| 0x10a | HVX_SCATGATH_SHARED_FULL | 0 | 0.0 |
| 0x10b | HVX_ST_L2_SHARED_FULL | 0 | 0.0 |
| 0x10c | HVX_ST_ST_BANK_CONFLICT | 0 | 0.0 |
| 0x10d | HVX_VTCM_BANDWIDTH_OVER | 0 | 0.0 |
| 0x10e | HVX_OTHER_PART_OUTSTANDING | 1036523 | 2.263067 |
| 0x10f | HVX_VOLTAGE_VIRUS_OVER | 0 | 0.0 |
| 0x110 | HVX_VOLTAGE_UNDER | 0 | 0.0 |
| 0x111 | HVX_POWER_OVER | 0 | 0.0 |
| 0x112 | HVX_PARTIAL_PKT | 0 | 0.0 |
| 0x113 | HVX_PKT | 0 | 0.0 |
| 0x114 | HVX_ST_DWR_BANK_CONFLICT | 0 | 0.0 |
| 0x118 | HVX_PKT_THREAD | 2 | 4e-06 |
| 0x119 | HVX_CORE_VFIFO_FULL_STALL | 9465 | 0.020665 |
| 0x11a | HVX_L2_STORE_ACCESS | 0 | 0.0 |
| 0x11b | HVX_L2_STORE_MISS | 48554 | 0.106009 |
| 0x11c | HVX_L2_LOAD_ACCESS | 0 | 0.0 |
| 0x11d | HVX_L2_LOAD_MISS | 48692 | 0.10631 |
| 0x11e | HVX_L2_LOAD_SECONDARY_MISS | 25558 | 0.055801 |
| 0x11f | HVX_TCM_STORE_ACCESS | 0 | 0.0 |
| 0x120 | HVX_TCM_LOAD_ACCESS | 0 | 0.0 |
| 0x128 | VTCM_VECTOR_EXHAUSTED | 2445498 | 5.339317 |
| 0x129 | VTCM_VECTOR_SCALAR_ORDER | 0 | 0.0 |
| 0x12a | VTCM_VECTOR_LD_GATH_ORDER | 0 | 0.0 |
| 0x12b | VTCM_VECTOR_LD_ST_ORDER | 0 | 0.0 |
| 0x12c | VTCM_VECTOR_LD_FULL | 0 | 0.0 |
| 0x12d | VTCM_VECTOR_SCATGATH_FULL | 0 | 0.0 |
| 0x12e | VTCM_VECTOR_ST_FULL | 0 | 0.0 |
| 0x12f | VTCM_VECTOR_LD_ST_BANK_CONFLICT | 0 | 0.0 |
| 0x130 | VTCM_VECTOR_LD_LD_BANK_CONFLICT | 0 | 0.0 |
| 0x131 | VTCM_VECTOR_LD_PARTIAL_PKT | 0 | 0.0 |
| 0x132 | VTCM_VECTOR_ST_PARTIAL_PKT | 0 | 0.0 |
| 0x133 | VTCM_VECTOR_LD_PKT | 0 | 0.0 |
| 0x134 | VTCM_VECTOR_ST_PKT | 0 | 0.0 |
| 0x135 | VTCM_VECTOR_SCATGATH_PKT | 0 | 0.0 |
| 0x136 | VTCM_VECTOR_NULL_PKT | 0 | 0.0 |
| 0x137 | VTCM_VECTOR_LD_DWR_BANK_CONFLICT | 0 | 0.0 |
| 0x138 | VTCM_VECTOR_LD_DWR_ORDER | 0 | 0.0 |
| 0x139 | VTCM_SCALAR_ACTIVE | 0 | 0.0 |
| 0x13a | VTCM_SCALAR_EMPTY | 0 | 0.0 |
| 0x13b | VTCM_SCALAR_PORT_CONFLICT | 0 | 0.0 |
| 0x13c | VTCM_SCALAR_ST_ORDER | 0 | 0.0 |
| 0x13d | VTCM_SCALAR_VECTOR_ORDER | 0 | 0.0 |
| 0x13e | VTCM_SCALAR_LD_OUTSTANDING | 0 | 0.0 |
| 0x13f | VTCM_SCALAR_LD_SHARED_FULL | 0 | 0.0 |
| 0x140 | VTCM_SCALAR_BANK_CONFLICT | 0 | 0.0 |
| 0x141 | VTCM_SCALAR_LD_PIPELINE_CONFLICT | 0 | 0.0 |
| 0x142 | VTCM_SCALAR_BANDWIDTH_OVER | 0 | 0.0 |
| 0x143 | VTCM_SCALAR_LD | 0 | 0.0 |
| 0x144 | VTCM_SCALAR_LDHIT | 0 | 0.0 |
| 0x145 | VTCM_SCALAR_ST | 0 | 0.0 |
| 0x146 | VTCM_SCALAR_DWR | 0 | 0.0 |
| 0x148 | SCATGATH_SB_ACTIVE | 0 | 0.0 |
| 0x149 | SCATGATH_SB_WAIT_EMPTY | 0 | 0.0 |
| 0x14a | SCATGATH_SB_EMPTY | 0 | 0.0 |
| 0x14b | SCATGATH_SB_WAIT | 0 | 0.0 |
| 0x14c | SCATGATH_SB_OUTSTANDING | 0 | 0.0 |
| 0x14d | SCATGATH_SB | 0 | 0.0 |
| 0x150 | SCATGATH_IN_EMPTY | 0 | 0.0 |
| 0x151 | SCATGATH_IN_OUTSTANDING | 0 | 0.0 |
| 0x152 | SCATGATH_IN | 0 | 0.0 |
| 0x158 | HVX_VREG_RD_EARLY_WR_1PKT | 19972 | 0.043605 |
| 0x159 | HVX_VREG_RD_EARLY_WR_2PKT | 12640 | 0.027597 |
| 0x15a | HVX_VREG_RD_EARLY_WR | 530234 | 1.157673 |
| 0x15b | HVX_VREG_RD_LATE_WR_1PKT | 12150 | 0.026527 |
| 0x15c | HVX_VREG_RD_LATE_WR_2PKT | 0 | 0.0 |
| 0x15d | HVX_VREG_RD_LATE_WR | 558960 | 1.220391 |
| 0x15e | HVX_VREG_WR_EARLY_WR_1PKT | 0 | 0.0 |
| 0x15f | HVX_VREG_WR_EARLY_WR_2PKT | 48600 | 0.10611 |
| 0x160 | HVX_VREG_WR_EARLY_WR | 501834 | 1.095667 |
| 0x161 | HVX_VREG_WR_LATE_WR_1PKT | 0 | 0.0 |
| 0x162 | HVX_VREG_WR_LATE_WR_2PKT | 48600 | 0.10611 |
| 0x163 | HVX_VREG_WR_LATE_WR | 510364 | 1.11429 |
| 0x165 | HVX_MAX_VOLT_UNDERSHOOT | 0 | 0.0 |
| 0x166 | VDWB_EMPTY | 0 | 0.0 |
| 0x167 | VDWB_WR_ST_BANK_CONFLICT | 0 | 0.0 |
| 0x168 | VDWB_WR | 0 | 0.0 |
| 0x169 | DMA_WR_SB_VALID | 0 | 0.0 |
| 0x16a | HVX_VREG_RD_EARLY_WR_3PKT | 20414 | 0.04457 |
| 0x16b | HVX_VREG_RD_LATE_WR_3PKT | 12150 | 0.026527 |
| 0x16c | HVX_VREG_WR_EARLY_WR_3PKT | 8530 | 0.018624 |
| 0x16d | HVX_VREG_WR_LATE_WR_3PKT | 0 | 0.0 |

| Name | Count | Equation |
|---|---|---|
| 1T_CPP | 7.966881 | PMU(PE_CYCLES_1_THREAD_RUNNING)/PMU(PE_COMMITTED_PKT_1_THREAD_RUNNING) |
| 2T_CPP | 0.0 | PMU(PE_CYCLES_2_THREAD_RUNNING)/PMU(PE_COMMITTED_PKT_2_THREAD_RUNNING) |
| 3T_CPP | 0.0 | PMU(PE_CYCLES_3_THREAD_RUNNING)/PMU(PE_COMMITTED_PKT_3_THREAD_RUNNING) |
| 4T_CPP | 0.0 | PMU(PE_CYCLES_4_THREAD_RUNNING)/PMU(PE_COMMITTED_PKT_4_THREAD_RUNNING) |
| AXI_READ_BYTES | 3343104 | PMU(PE_AXI_LINE32_READ_REQUEST)*32 + PMU(PE_AXI_LINE64_READ_REQUEST)*64 + (PMU(PE_AXI_READ_REQUEST) - PMU(PE_AXI_LINE32_READ_REQUEST) - PMU(PE_AXI_LINE64_READ_REQUEST))*8 |
| AXI_WRITE_BYTES | 3699968 | PMU(PE_AXI_LINE32_WRITE_REQUEST)*32 + PMU(PE_AXI_LINE64_WRITE_REQUEST)*64 + (PMU(PE_AXI_WRITE_REQUEST) - PMU(PE_AXI_LINE32_WRITE_REQUEST) - PMU(PE_AXI_LINE64_WRITE_REQUEST))*8 |
| COMMITED_COMPLEX_ALU | 19694 | PMU(PE_COMMITTED_INSTS) - (PMU(PE_COMMITTED_LOADS) + PMU(PE_COMMITTED_STORES) + PMU(PE_COMMITTED_MEMOPS) + PMU(PE_COMMITTED_TC1_INSTS)) - PMU(PE_COMMITTED_PROGRAM_FLOW_INSTS) |
| COMMITED_UNCOND_BRANCHES | 19549 | PMU(PE_COMMITTED_PROGRAM_FLOW_INSTS) - PMU(PE_COMMITTED_PKT_ENDLOOP) - PMU(PE_COMMITTED_BIMODAL_BRANCH_INSTS) |
| COMMITTED_0_PKTS | 3190942 | DS(TOTAL_PCYCLES) - DS(COMMITTED_1_PKTS) - PMU(PE_COMMITTED_PKT_SMT) |
| COMMITTED_1_PKTS | 458017 | PMU(PE_COMMITTED_PKT_ANY) - 2*PMU(PE_COMMITTED_PKT_SMT) |
| CPP | 7.966864 | (DS(TOTAL_PCYCLES))/PMU(PE_COMMITTED_PKT_ANY) |
| DCACHE_PRIMARY_MISS | 3273 | PMU(PE_DCACHE_DEMAND_MISS) - PMU(PE_DU_DEMAND_SECONDARY_MISS) |
| DFETCH_FILLED | 0 | PMU(PE_DCFETCH_COMMITTED) - PMU(PE_DCFETCH_HIT) |
| INSTR_PER_PACKET | 1.894176 | PMU(PE_COMMITTED_INSTS)/PMU(PE_COMMITTED_PKT_ANY) |
| IPC | 0.276697 | (PMU(PE_COMMITTED_INSTS) + PMU(PE_COMMITTED_PKT_ENDLOOP)*2)/DS(TOTAL_PCYCLES) |
| L2_CACHE_DU_DEMAND_MISS | 2064 | PMU(PE_L2_DU_READ_MISS) - PMU(PE_L2_DU_PREFETCH_MISS) |
| L2_CACHE_IU_DEMAND_MISS | 127 | PMU(PE_L2_IU_MISS) - PMU(PE_L2_IU_PREFETCH_MISS) |
| L2_EVICTIONS | 57736 | (PMU(PE_AXI_WRITE_REQUEST) - PMU(PE_L2_DU_STORE_MISS)) |
| TOTAL_BUS_READS | 52236 | PMU(PE_AXI_READ_REQUEST) + PMU(PE_AHB_READ_REQUEST) + PMU(PE_AXI2_READ_REQUEST) |
| TOTAL_BUS_WRITES | 57812 | PMU(PE_AXI_WRITE_REQUEST) + PMU(PE_AHB_WRITE_REQUEST) + PMU(PE_AXI2_WRITE_REQUEST) |
| TOTAL_PCYCLES | 3648959 | (PMU(PE_CYCLES_1_THREAD_RUNNING)+PMU(PE_CYCLES_2_THREAD_RUNNING)+PMU(PE_CYCLES_3_THREAD_RUNNING)+PMU(PE_CYCLES_4_THREAD_RUNNING)) |
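A few of the derived statistics can be recomputed directly from the per-event PMU listing. A minimal sketch using the equations from the Equation column, with PMU values copied from the event table above (only one thread ran, so TOTAL_PCYCLES reduces to CYCLES_1_THREAD_RUNNING):

```python
# Recompute CPP, INSTR_PER_PACKET, and AXI_READ_BYTES from raw PMU
# counts, following the equations listed in the derived-stats table.
pmu = {
    "CYCLES_1_THREAD_RUNNING": 3648959,
    "COMMITTED_PKT_ANY": 458017,
    "COMMITTED_INSTS": 867565,
    "AXI_READ_REQUEST": 52236,
    "AXI_LINE32_READ_REQUEST": 0,
    "AXI_LINE64_READ_REQUEST": 52236,
}

# Only CYCLES_1_THREAD_RUNNING is nonzero, so it equals TOTAL_PCYCLES.
total_pcycles = pmu["CYCLES_1_THREAD_RUNNING"]
cpp = total_pcycles / pmu["COMMITTED_PKT_ANY"]
instr_per_packet = pmu["COMMITTED_INSTS"] / pmu["COMMITTED_PKT_ANY"]
# Requests that are neither 32-byte nor 64-byte lines count as 8 bytes.
axi_read_bytes = (
    pmu["AXI_LINE32_READ_REQUEST"] * 32
    + pmu["AXI_LINE64_READ_REQUEST"] * 64
    + (pmu["AXI_READ_REQUEST"]
       - pmu["AXI_LINE32_READ_REQUEST"]
       - pmu["AXI_LINE64_READ_REQUEST"]) * 8
)
print(round(cpp, 6), round(instr_per_packet, 6), axi_read_bytes)
```

This reproduces CPP = 7.966864, INSTR_PER_PACKET = 1.894176, and AXI_READ_BYTES = 3343104 from the table.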

| # | Tag | Syntax | Count | Pct |
|---|---|---|---|---|
| 0 | A2_addi | Rd32=add(Rs32,#s16) | 427411 | 11.503% |
| 1 | J2_endloop0 | endloop0 | 361197 | 9.721% |
| 2 | A2_nop | nop | 323491 | 8.706% |
| 3 | V6_vabsdiffuh | Vd32.uh=vabsdiff(Vu32.uh,Vv32.uh) | 291600 | 7.848% |
| 4 | V6_vL32Ub_pi | Vd32=vmemu(Rx32++#s3) | 218700 | 5.886% |
| 5 | V6_vaddh_dv | Vdd32.h=vadd(Vuu32.h,Vvv32.h) | 218700 | 5.886% |
| 6 | V6_vassign | Vd32=Vu32 | 218700 | 5.886% |
| 7 | Y2_dccleaninva | dccleaninva(Rs32) | 211508 | 5.692% |
| 8 | V6_vminuh | Vd32.uh=vmin(Vu32.uh,Vv32.uh) | 145800 | 3.924% |
| 9 | V6_vmpabus | Vdd32.h=vmpa(Vuu32.ub,Rt32.b) | 145800 | 3.924% |
| 10 | V6_vzb | Vdd32.uh=vzxt(Vu32.ub) | 145800 | 3.924% |
| 11 | V6_vtmpybus | Vdd32.h=vtmpy(Vuu32.ub,Rt32.b) | 109350 | 2.943% |
| 12 | V6_vS32Ub_pi | vmemu(Rx32++#s3)=Vs32 | 72901 | 1.962% |
| 13 | V6_vshuffeb | Vd32.b=vshuffe(Vu32.b,Vv32.b) | 72900 | 1.962% |
| 14 | Y2_dczeroa | dczeroa(Rs32) | 65440 | 1.761% |
| 15 | Y2_dcinva | dcinva(Rs32) | 64800 | 1.744% |
| 16 | L2_loadri_io | Rd32=memw(Rs32+#s11:2) | 42789 | 1.152% |
| 17 | A2_tfr | Rd32=Rs32 | 40925 | 1.101% |
| 18 | V6_vL32b_ai | Vd32=vmem(Rt32+#s4) | 36450 | 0.981% |
| 19 | A2_andir | Rd32=and(Rs32,#s10) | 23233 | 0.625% |
| 20 | A2_subri | Rd32=sub(#s10,Rs32) | 23002 | 0.619% |
| 21 | A2_add | Rd32=add(Rs32,Rt32) | 22483 | 0.605% |
| 22 | SL2_loadrd_sp | Rdd8=memd(r29+#u5:3) | 20443 | 0.550% |
| 23 | J2_call | call #r22:2 | 20263 | 0.545% |
| 24 | C2_cmpeqi | Pd4=cmp.eq(Rs32,#s10) | 18674 | 0.503% |
| 25 | V6_vL32b_cur_ai | Vd32.cur=vmem(Rt32+#s4) | 18225 | 0.490% |
| 26 | J4_cmpeqi_tp0_jump_nt | p0=cmp.eq(Rs16,#U5); if (p0.new) jump:nt #r9:2 | 15811 | 0.426% |
| 27 | J2_jump | jump #r22:2 | 14926 | 0.402% |
| 28 | S2_storerd_io | memd(Rs32+#s11:3)=Rtt32 | 13949 | 0.375% |
| 29 | J2_jumpr | jumpr Rs32 | 13059 | 0.351% |
| 30 | J2_loop0r | loop0(#r7:2,Rs32) | 12230 | 0.329% |
| 31 | S2_lsr_i_r | Rd32=lsr(Rs32,#u5) | 12132 | 0.327% |
| 32 | S4_addaddi | Rd32=add(Rs32,add(Ru32,#s6)) | 10824 | 0.291% |
| 33 | S2_storeri_io | memw(Rs32+#s11:2)=Rt32 | 10229 | 0.275% |
| 34 | A2_tfrsi | Rd32=#s16 | 10134 | 0.273% |
| 35 | A2_sub | Rd32=sub(Rt32,Rs32) | 9907 | 0.267% |
| 36 | L2_loadrb_io | Rd32=memb(Rs32+#s11:0) | 9007 | 0.242% |
| 37 | Y2_crswap0 | crswap(Rx32,sgp0) | 8686 | 0.234% |
| 38 | L2_loadrd_pi | Rdd32=memd(Rx32++#s4:3) | 8652 | 0.233% |
| 39 | S2_pstorerdt_pi | if (Pv4) memd(Rx32++#s4:3)=Rtt32 | 8652 | 0.233% |
| 40 | SS2_stored_sp | memd(r29+#s6:3)=Rtt8 | 8116 | 0.218% |
| 41 | A2_combinew | Rdd32=combine(Rs32,Rt32) | 7687 | 0.207% |
| 42 | J2_jumpt | if (Pu4) jump:nt #r15:2 | 7531 | 0.203% |
| 43 | L2_loadrigp | Rd32=memw(gp+#u16:2) | 7046 | 0.190% |
| 44 | SS2_storew_sp | memw(r29+#u5:2)=Rt16 | 6518 | 0.175% |
| 45 | J2_jumptnew | if (Pu4.new) jump:nt #r15:2 | 6442 | 0.173% |
| 46 | S2_storerinew_io | memw(Rs32+#s11:2)=Nt8.new | 6020 | 0.162% |
| 47 | SL2_return | dealloc_return | 5943 | 0.160% |
| 48 | SA1_addsp | Rd16=add(r29,#u6:2) | 5698 | 0.153% |
| 49 | SS2_allocframe | allocframe(#u5:3) | 5349 | 0.144% |
| 50 | L2_ploadrit_io | if (Pt4) Rd32=memw(Rs32+#u6:2) | 4860 | 0.131% |
| 51 | A2_tfrih | Rx.H32=#u16 | 4412 | 0.119% |
| 52 | A2_tfrcrr | Rd32=Cs32 | 4345 | 0.117% |
| 53 | A2_tfrrcr | Cd32=Rs32 | 4345 | 0.117% |
| 54 | J2_rte | rte | 4343 | 0.117% |
| 55 | J2_trap0 | trap0(#u8) | 4334 | 0.117% |
| 56 | J4_cmpeqn1_fp0_jump_t | p0=cmp.eq(Rs16,#-1); if (!p0.new) jump:t #r9:2 | 4328 | 0.116% |
| 57 | S2_asl_i_r_nac | Rx32-=asl(Rs32,#u5) | 4318 | 0.116% |
| 58 | C2_cmpeq | Pd4=cmp.eq(Rs32,Rt32) | 3857 | 0.104% |
| 59 | A2_paddfnew | if (!Pu4.new) Rd32=add(Rs32,Rt32) | 3848 | 0.104% |
| 60 | C2_cmpgt | Pd4=cmp.gt(Rs32,Rt32) | 3292 | 0.089% |
| 61 | C2_not | Pd4=not(Ps4) | 3261 | 0.088% |
| 62 | SA1_tfr | Rd16=Rs16 | 3254 | 0.088% |
| 63 | C2_cmpgtui | Pd4=cmp.gtu(Rs32,#u9) | 2972 | 0.080% |
| 64 | J4_cmpeqi_fp0_jump_t | p0=cmp.eq(Rs16,#U5); if (!p0.new) jump:t #r9:2 | 2847 | 0.077% |
| 65 | A4_combineir | Rdd32=combine(#s8,Rs32) | 2752 | 0.074% |
| 66 | A2_or | Rd32=or(Rs32,Rt32) | 2631 | 0.071% |
| 67 | SA1_cmpeqi | p0=cmp.eq(Rs16,#u2) | 2520 | 0.068% |
| 68 | A2_paddit | if (Pu4) Rd32=add(Rs32,#s8) | 2436 | 0.066% |
| 69 | M2_mpyi | Rd32=mpyi(Rs32,Rt32) | 2401 | 0.065% |
| 70 | S2_allocframe | allocframe(Rx32,#u11:3):raw | 2391 | 0.064% |
| 71 | SL2_jumpr31_tnew | if (p0.new) jumpr:nt r31 | 2346 | 0.063% |
| 72 | M2_acci | Rx32+=add(Rs32,Rt32) | 2198 | 0.059% |
| 73 | J2_jumpf | if (!Pu4) jump:nt #r15:2 | 1724 | 0.046% |
| 74 | C2_ccombinewt | if (Pu4) Rdd32=combine(Rs32,Rt32) | 1622 | 0.044% |
| 75 | A2_paddifnew | if (!Pu4.new) Rd32=add(Rs32,#s8) | 1618 | 0.044% |
| 76 | J2_jumprf | if (!Pu4) jumpr:nt Rs32 | 1509 | 0.041% |
| 77 | A2_padditnew | if (Pu4.new) Rd32=add(Rs32,#s8) | 1490 | 0.040% |
| 78 | C2_cmpgti | Pd4=cmp.gt(Rs32,#s10) | 1405 | 0.038% |
| 79 | L4_return | Rdd32=dealloc_return(Rs32):raw | 1401 | 0.038% |
| 80 | C2_andn | Pd4=and(Pt4,!Ps4) | 1358 | 0.037% |
| 81 | C2_bitsclri | Pd4=bitsclr(Rs32,#u6) | 1283 | 0.035% |
| 82 | C4_cmpneq | Pd4=!cmp.eq(Rs32,Rt32) | 1270 | 0.034% |
| 83 | C4_cmpneqi | Pd4=!cmp.eq(Rs32,#s10) | 1270 | 0.034% |
| 84 | J2_ploop1sr | p3=sp1loop0(#r7:2,Rs32) | 1270 | 0.034% |
| 85 | Y2_dcfetchbo | dcfetch(Rs32+#u11:3) | 1270 | 0.034% |
| 86 | M2_maci | Rx32+=mpyi(Rs32,Rt32) | 1120 | 0.030% |
| 87 | A2_min | Rd32=min(Rt32,Rs32) | 1089 | 0.029% |
| 88 | SA1_clrtnew | if (p0.new) Rd16=#0 | 1088 | 0.029% |
| 89 | J2_jumpfnew | if (!Pu4.new) jump:nt #r15:2 | 1046 | 0.028% |
| 90 | S2_addasl_rrri | Rd32=addasl(Rt32,Rs32,#u3) | 895 | 0.024% |
| 91 | L2_ploadritnew_io | if (Pt4.new) Rd32=memw(Rs32+#u6:2) | 870 | 0.023% |
| 92 | SL1_loadri_io | Rd16=memw(Rs16+#u4:2) | 860 | 0.023% |
| 93 | C2_cmpgtu | Pd4=cmp.gtu(Rs32,Rt32) | 822 | 0.022% |
| 94 | C2_and | Pd4=and(Pt4,Ps4) | 810 | 0.022% |
| 95 | J2_endloop1 | endloop1 | 810 | 0.022% |
| 96 | L2_loadw_locked | Rd32=memw_locked(Rs32) | 649 | 0.017% |
| 97 | C2_muxii | Rd32=mux(Pu4,#s8,#S8) | 638 | 0.017% |
| 98 | L2_loadrub_io | Rd32=memub(Rs32+#s11:0) | 604 | 0.016% |
| 99 | SA1_seti | Rd16=#u6 | 569 | 0.015% |
| 100 | A2_combineii | Rdd32=combine(#s8,#S8) | 558 | 0.015% |
| 101 | C2_vmux | Rdd32=vmux(Pu4,Rss32,Rtt32) | 542 | 0.015% |
| 102 | Y2_tfrscrr | Rd32=Ss64 | 539 | 0.015% |
| 103 | S2_lsr_i_p | Rdd32=lsr(Rss32,#u6) | 522 | 0.014% |
| 104 | S4_storeiritnew_io | if (Pv4.new) memw(Rs32+#u6:2)=#S6 | 522 | 0.014% |
| 105 | S2_storew_locked | memw_locked(Rs32,Pd4)=Rt32 | 449 | 0.012% |
| 106 | C2_cmovenewif | if (!Pu4.new) Rd32=#s12 | 429 | 0.012% |
| 107 | L2_loadrd_io | Rdd32=memd(Rs32+#s11:3) | 427 | 0.011% |
| 108 | S2_cl0 | Rd32=cl0(Rs32) | 402 | 0.011% |
| 109 | S4_addi_asl_ri | Rx32=add(#u8,asl(Rx32,#U5)) | 402 | 0.011% |
| 110 | A2_psubfnew | if (!Pu4.new) Rd32=sub(Rt32,Rs32) | 394 | 0.011% |
| 111 | S2_lsr_i_vw | Rdd32=vlsrw(Rss32,#u5) | 394 | 0.011% |
| 112 | L2_loadruh_io | Rd32=memuh(Rs32+#s11:1) | 387 | 0.010% |
| 113 | J4_cmpeqi_t_jumpnv_nt | if (cmp.eq(Ns8.new,#U5)) jump:nt #r9:2 | 386 | 0.010% |
| 114 | S4_ntstbit_i | Pd4=!tstbit(Rs32,#u5) | 373 | 0.010% |
| 115 | A2_sxth | Rd32=sxth(Rs32) | 371 | 0.010% |
| 116 | J4_cmpltu_f_jumpnv_t | if (!cmp.gtu(Rt32,Ns8.new)) jump:t #r9:2 | 371 | 0.010% |
| 117 | J2_jumptnewpt | if (Pu4.new) jump:t #r15:2 | 370 | 0.010% |
| 118 | S2_extractu | Rd32=extractu(Rs32,#u5,#U5) | 370 | 0.010% |
| 119 | SL2_loadri_sp | Rd16=memw(r29+#u5:2) | 351 | 0.009% |
| 120 | L2_loadrb_pi | Rd32=memb(Rx32++#s4:0) | 347 | 0.009% |
| 121 | J4_cmpeq_t_jumpnv_nt | if (cmp.eq(Ns8.new,Rt32)) jump:nt #r9:2 | 321 | 0.009% |
| 122 | S2_asl_i_r | Rd32=asl(Rs32,#u5) | 321 | 0.009% |
| 123 | C2_cmpgtup | Pd4=cmp.gtu(Rss32,Rtt32) | 295 | 0.008% |
| 124 | A2_subp | Rdd32=sub(Rtt32,Rss32) | 285 | 0.008% |
| 125 | S4_storeiri_io | memw(Rs32+#u6:2)=#S8 | 285 | 0.008% |
| 126 | A2_addp | Rdd32=add(Rss32,Rtt32) | 259 | 0.007% |
| 127 | C2_cmovenewit | if (Pu4.new) Rd32=#s12 | 258 | 0.007% |
| 128 | S2_asr_i_r | Rd32=asr(Rs32,#u5) | 256 | 0.007% |
| 129 | S2_pstorerbt_pi | if (Pv4) memb(Rx32++#s4:0)=Rt32 | 255 | 0.007% |
| 130 | C2_bitsclr | Pd4=bitsclr(Rs32,Rt32) | 252 | 0.007% |
| 131 | A2_tfril | Rx.L32=#u16 | 250 | 0.007% |
| 132 | J4_cmpgti_f_jumpnv_nt | if (!cmp.gt(Ns8.new,#U5)) jump:nt #r9:2 | 248 | 0.007% |
| 133 | J4_cmpgti_tp0_jump_t | p0=cmp.gt(Rs16,#U5); if (p0.new) jump:t #r9:2 | 228 | 0.006% |
| 134 | J2_jumpfnewpt | if (!Pu4.new) jump:t #r15:2 | 224 | 0.006% |
| 135 | L2_loadrubgp | Rd32=memub(gp+#u16:0) | 220 | 0.006% |
| 136 | SL2_loadruh_io | Rd16=memuh(Rs16+#u3:1) | 217 | 0.006% |
| 137 | A2_svsubh | Rd32=vsubh(Rt32,Rs32) | 216 | 0.006% |
| 138 | SL2_deallocframe | deallocframe | 215 | 0.006% |
| 139 | S2_storerb_io | memb(Rs32+#s11:0)=Rt32 | 212 | 0.006% |
| 140 | SL1_loadrub_io | Rd16=memub(Rs16+#u4:0) | 208 | 0.006% |
| 141 | A4_cmpbeqi | Pd4=cmpb.eq(Rs32,#u8) | 202 | 0.005% |
| 142 | J4_cmpgti_f_jumpnv_t | if (!cmp.gt(Ns8.new,#U5)) jump:t #r9:2 | 202 | 0.005% |
| 143 | A2_sxtb | Rd32=sxtb(Rs32) | 201 | 0.005% |
| 144 | S4_storeirh_io | memh(Rs32+#u6:1)=#S8 | 200 | 0.005% |
| 145 | SL2_jumpr31_t | if (p0) jumpr r31 | 197 | 0.005% |
| 146 | S2_lsl_r_vw | Rdd32=vlslw(Rss32,Rt32) | 196 | 0.005% |
| 147 | J2_callr | callr Rs32 | 189 | 0.005% |
| 148 | L4_ploadrubfnew_abs | if (!Pt4.new) Rd32=memub(#u6) | 182 | 0.005% |
| 149 | C2_cmoveif | if (!Pu4) Rd32=#s12 | 180 | 0.005% |
| 150 | SA1_combinezr | Rdd8=combine(#0,Rs16) | 180 | 0.005% |
| 151 | J4_cmpeqi_tp0_jump_t | p0=cmp.eq(Rs16,#U5); if (p0.new) jump:t #r9:2 | 179 | 0.005% |
| 152 | L2_deallocframe | Rdd32=deallocframe(Rs32):raw | 178 | 0.005% |
| 153 | S4_pstorerifnew_io | if (!Pv4.new) memw(Rs32+#u6:2)=Rt32 | 178 | 0.005% |
| 154 | J4_cmpgtu_t_jumpnv_t | if (cmp.gtu(Ns8.new,Rt32)) jump:t #r9:2 | 176 | 0.005% |
| 155 | A2_minu | Rd32=minu(Rt32,Rs32) | 175 | 0.005% |
| 156 | C2_muxir | Rd32=mux(Pu4,Rs32,#s8) | 175 | 0.005% |
| 157 | L4_add_memopw_io | memw(Rs32+#u6:2)+=Rt32 | 175 | 0.005% |
| 158 | A2_psubf | if (!Pu4) Rd32=sub(Rt32,Rs32) | 174 | 0.005% |
| 159 | L2_ploadrbfnew_io | if (!Pt4.new) Rd32=memb(Rs32+#u6:0) | 174 | 0.005% |
| 160 | Y2_isync | isync | 164 | 0.004% |
| 161 | SA1_inc | Rd16=add(Rs16,#1) | 162 | 0.004% |
| 162 | Y2_tlbw | tlbw(Rss32,Rt32) | 133 | 0.004% |
| 163 | J2_jumprzpt | if (Rs32!=#0) jump:t #r13:2 | 127 | 0.003% |
| 164 | S2_storerhnew_io | memh(Rs32+#s11:1)=Nt8.new | 117 | 0.003% |
| 165 | SA1_dec | Rd16=add(Rs16,#-1) | 114 | 0.003% |
| 166 | F2_dfcmpuo | Pd4=dfcmp.uo(Rss32,Rtt32) | 98 | 0.003% |
| 167 | J4_cmpgti_fp0_jump_nt | p0=cmp.gt(Rs16,#U5); if (!p0.new) jump:nt #r9:2 | 92 | 0.002% |
| 168 | V6_vcombine | Vdd32=vcombine(Vu32,Vv32) | 90 | 0.002% |
| 169 | S2_tstbit_i | Pd4=tstbit(Rs32,#u5) | 87 | 0.002% |
| 170 | F2_dfcmpeq | Pd4=dfcmp.eq(Rss32,Rtt32) | 84 | 0.002% |
| 171 | S2_storerh_io | memh(Rs32+#s11:1)=Rt32 | 80 | 0.002% |
| 172 | L2_loadrh_io | Rd32=memh(Rs32+#s11:1) | 77 | 0.002% |
| 173 | S2_storerbnew_pi | memb(Rx32++#s4:0)=Nt8.new | 75 | 0.002% |
| 174 | L2_loadrub_pi | Rd32=memub(Rx32++#s4:0) | 73 | 0.002% |
| 175 | S2_cl0p | Rd32=cl0(Rss32) | 72 | 0.002% |
| 176 | S2_lsl_r_p | Rdd32=lsl(Rss32,Rt32) | 72 | 0.002% |
| 177 | A2_and | Rd32=and(Rs32,Rt32) | 70 | 0.002% |
| 178 | S4_or_andix | Rx32=or(Ru32,and(Rx32,#s10)) | 62 | 0.002% |
| 179 | A2_abs | Rd32=abs(Rs32) | 56 | 0.002% |
| 180 | J4_cmpeqi_f_jumpnv_t | if (!cmp.eq(Ns8.new,#U5)) jump:t #r9:2 | 55 | 0.001% |
| 181 | A2_absp | Rdd32=abs(Rss32) | 54 | 0.001% |
| 182 | C2_xor | Pd4=xor(Ps4,Pt4) | 52 | 0.001% |
| 183 | L4_loadrub_rr | Rd32=memub(Rs32+Rt32<<#u2) | 52 | 0.001% |
| 184 | SS1_storew_io | memw(Rs16+#u4:2)=Rt16 | 51 | 0.001% |
| 185 | C4_nbitsclr | Pd4=!bitsclr(Rs32,Rt32) | 50 | 0.001% |
| 186 | M2_dpmpyuu_s0 | Rdd32=mpyu(Rs32,Rt32) | 50 | 0.001% |
| 187 | SA1_sxth | Rd16=sxth(Rs16) | 42 | 0.001% |
| 188 | SL2_jumpr31 | jumpr r31 | 41 | 0.001% |
| 189 | J4_cmpeq_fp0_jump_t | p0=cmp.eq(Rs16,Rt16); if (!p0.new) jump:t #r9:2 | 40 | 0.001% |
| 190 | SL2_loadrb_io | Rd16=memb(Rs16+#u3:0) | 40 | 0.001% |
| 191 | J4_cmpgt_fp0_jump_t | p0=cmp.gt(Rs16,Rt16); if (!p0.new) jump:t #r9:2 | 37 | 0.001% |
| 192 | Y2_tfrsrcr | Sd64=Rs32 | 37 | 0.001% |
| 193 | C2_cmpgtp | Pd4=cmp.gt(Rss32,Rtt32) | 36 | 0.001% |
| 194 | J2_jumprt | if (Pu4) jumpr:nt Rs32 | 35 | 0.001% |
| 195 | J4_jumpseti | Rd16=#U6 ; jump #r9:2 | 35 | 0.001% |
| 196 | L2_ploadrdfnew_io | if (!Pt4.new) Rdd32=memd(Rs32+#u6:3) | 34 | 0.001% |
| 197 | L4_loadri_ur | Rd32=memw(Rt32<<#u2+#U6) | 34 | 0.001% |
| 198 | S2_storerbnew_io | memb(Rs32+#s11:0)=Nt8.new | 34 | 0.001% |
| 199 | S2_asl_i_r_or | Rx32|=asl(Rs32,#u5) | 33 | 0.001% |
| 200 | A2_negp | Rdd32=neg(Rss32) | 32 | 0.001% |
| 201 | J4_cmpgti_tp0_jump_nt | p0=cmp.gt(Rs16,#U5); if (p0.new) jump:nt #r9:2 | 29 | 0.001% |
| 202 | J4_cmpgtui_tp0_jump_t | p0=cmp.gtu(Rs16,#U5); if (p0.new) jump:t #r9:2 | 29 | 0.001% |
| 203 | L4_loadrd_rr | Rdd32=memd(Rs32+Rt32<<#u2) | 29 | 0.001% |
| 204 | A2_xor | Rd32=xor(Rs32,Rt32) | 28 | 0.001% |
| 205 | C2_tfrpr | Rd32=Ps4 | 28 | 0.001% |
| 206 | L2_ploadrhfnew_io | if (!Pt4.new) Rd32=memh(Rs32+#u6:1) | 28 | 0.001% |
| 207 | SA1_addi | Rx16=add(Rx16,#s7) | 28 | 0.001% |
| 208 | A2_paddif | if (!Pu4) Rd32=add(Rs32,#s8) | 27 | 0.001% |
| 209 | S2_clrbit_i | Rd32=clrbit(Rs32,#u5) | 27 | 0.001% |
| 210 | J4_cmpgt_f_jumpnv_t | if (!cmp.gt(Ns8.new,Rt32)) jump:t #r9:2 | 26 | 0.001% |
| 211 | J4_jumpsetr | Rd16=Rs16 ; jump #r9:2 | 26 | 0.001% |
| 212 | M2_mnaci | Rx32-=mpyi(Rs32,Rt32) | 26 | 0.001% |
| 213 | C2_cmoveit | if (Pu4) Rd32=#s12 | 24 | 0.001% |
| 214 | J4_cmpeqi_fp0_jump_nt | p0=cmp.eq(Rs16,#U5); if (!p0.new) jump:nt #r9:2 | 24 | 0.001% |
| 215 | S2_setbit_i | Rd32=setbit(Rs32,#u5) | 24 | 0.001% |
| 216 | S4_storeirhtnew_io | if (Pv4.new) memh(Rs32+#u6:1)=#S6 | 23 | 0.001% |
| 217 | A2_subh_l16_ll | Rd32=sub(Rt.L32,Rs.L32) | 22 | 0.001% |
| 218 | A4_combineii | Rdd32=combine(#s8,#U6) | 22 | 0.001% |
| 219 | C2_tfrrp | Pd4=Rs32 | 22 | 0.001% |
| 220 | J4_cmplt_f_jumpnv_nt | if (!cmp.gt(Rt32,Ns8.new)) jump:nt #r9:2 | 22 | 0.001% |
| 221 | L4_loadruh_rr | Rd32=memuh(Rs32+Rt32<<#u2) | 22 | 0.001% |
| 222 | L4_loadruh_ur | Rd32=memuh(Rt32<<#u2+#U6) | 22 | 0.001% |
| 223 | S2_insert | Rx32=insert(Rs32,#u5,#U5) | 22 | 0.001% |
| 224 | C4_nbitsclri | Pd4=!bitsclr(Rs32,#u6) | 20 | 0.001% |
| 225 | F2_dfsub | Rdd32=dfsub(Rss32,Rtt32) | 20 | 0.001% |
| 226 | J4_cmplt_t_jumpnv_t | if (cmp.gt(Rt32,Ns8.new)) jump:t #r9:2 | 20 | 0.001% |
| 227 | L2_ploadrifnew_io | if (!Pt4.new) Rd32=memw(Rs32+#u6:2) | 20 | 0.001% |
| 228 | S2_pstorerhnewt_io | if (Pv4) memh(Rs32+#u6:1)=Nt8.new | 20 | 0.001% |
| 229 | S4_storerhnew_rr | memh(Rs32+Ru32<<#u2)=Nt8.new | 20 | 0.001% |
| 230 | J2_jumprfnew | if (!Pu4.new) jumpr:nt Rs32 | 19 | 0.001% |
| 231 | V6_lvsplatw | Vd32=vsplat(Rt32) | 18 | 0.000% |
| 232 | C2_mux | Rd32=mux(Pu4,Rs32,Rt32) | 17 | 0.000% |
| 233 | J4_cmpeqi_tp1_jump_nt | p1=cmp.eq(Rs16,#U5); if (p1.new) jump:nt #r9:2 | 17 | 0.000% |
| 234 | C2_ccombinewnewf | if (!Pu4.new) Rdd32=combine(Rs32,Rt32) | 16 | 0.000% |
| 235 | F2_dfclass | Pd4=dfclass(Rss32,#u5) | 16 | 0.000% |
| 236 | F2_sffma_lib | Rx32+=sfmpy(Rs32,Rt32):lib | 16 | 0.000% |
| 237 | F2_sffms_lib | Rx32-=sfmpy(Rs32,Rt32):lib | 16 | 0.000% |
| 238 | L4_loadw_phys | Rd32=memw_phys(Rs32,Rt32) | 16 | 0.000% |
| 239 | S2_insertp | Rxx32=insert(Rss32,#u6,#U6) | 16 | 0.000% |
| 240 | S2_lsr_i_p_acc | Rxx32+=lsr(Rss32,#u6) | 16 | 0.000% |
| 241 | S4_storerd_rr | memd(Rs32+Ru32<<#u2)=Rtt32 | 16 | 0.000% |
| 242 | J4_cmpgtn1_f_jumpnv_nt | if (!cmp.gt(Ns8.new,#-1)) jump:nt #r9:2 | 15 | 0.000% |
| 243 | L2_ploadruhfnew_io | if (!Pt4.new) Rd32=memuh(Rs32+#u6:1) | 15 | 0.000% |
| 244 | J4_cmpeq_t_jumpnv_t | if (cmp.eq(Ns8.new,Rt32)) jump:t #r9:2 | 14 | 0.000% |
| 245 | J4_cmpgtn1_fp0_jump_nt | p0=cmp.gt(Rs16,#-1); if (!p0.new) jump:nt #r9:2 | 14 | 0.000% |
| 246 | J4_cmpgtn1_tp0_jump_t | p0=cmp.gt(Rs16,#-1); if (p0.new) jump:t #r9:2 | 14 | 0.000% |
| 247 | J4_tstbit0_fp0_jump_t | p0=tstbit(Rs16,#0); if (!p0.new) jump:t #r9:2 | 14 | 0.000% |
| 248 | S4_storeirifnew_io | if (!Pv4.new) memw(Rs32+#u6:2)=#S6 | 14 | 0.000% |
| 249 | C4_cmplte | Pd4=!cmp.gt(Rs32,Rt32) | 13 | 0.000% |
| 250 | J4_cmpltu_t_jumpnv_t | if (cmp.gtu(Rt32,Ns8.new)) jump:t #r9:2 | 13 | 0.000% |
| 251 | A4_vcmpbeq_any | Pd4=any8(vcmpb.eq(Rss32,Rtt32)) | 12 | 0.000% |
| 252 | C2_orn | Pd4=or(Pt4,!Ps4) | 12 | 0.000% |
| 253 | C4_addipc | Rd32=add(pc,#u6) | 12 | 0.000% |
| 254 | L4_ploadrdfnew_rr | if (!Pv4.new) Rdd32=memd(Rs32+Rt32<<#u2) | 12 | 0.000% |
| 255 | M2_mpysip | Rd32=+mpyi(Rs32,#u8) | 12 | 0.000% |
| 256 | S2_storerb_pi | memb(Rx32++#s4:0)=Rt32 | 12 | 0.000% |
| 257 | S4_pstorerbtnew_io | if (Pv4.new) memb(Rs32+#u6:0)=Rt32 | 12 | 0.000% |
| 258 | J4_cmpeqi_f_jumpnv_nt | if (!cmp.eq(Ns8.new,#U5)) jump:nt #r9:2 | 11 | 0.000% |
| 259 | J4_cmpgtu_f_jumpnv_t | if (!cmp.gtu(Ns8.new,Rt32)) jump:t #r9:2 | 11 | 0.000% |
| 260 | J4_cmpgtu_tp0_jump_nt | p0=cmp.gtu(Rs16,Rt16); if (p0.new) jump:nt #r9:2 | 11 | 0.000% |
| 261 | S2_storerigp | memw(gp+#u16:2)=Rt32 | 11 | 0.000% |
| 262 | L2_ploadrubfnew_pi | if (!Pt4.new) Rd32=memub(Rx32++#s4:0) | 10 | 0.000% |
| 263 | L4_loadrh_rr | Rd32=memh(Rs32+Rt32<<#u2) | 10 | 0.000% |
| 264 | J4_cmpeq_f_jumpnv_t | if (!cmp.eq(Ns8.new,Rt32)) jump:t #r9:2 | 9 | 0.000% |
| 265 | J4_cmpgtu_tp0_jump_t | p0=cmp.gtu(Rs16,Rt16); if (p0.new) jump:t #r9:2 | 9 | 0.000% |
| 266 | J4_cmpgtui_fp0_jump_nt | p0=cmp.gtu(Rs16,#U5); if (!p0.new) jump:nt #r9:2 | 9 | 0.000% |
| 267 | S4_subaddi | Rd32=add(Rs32,sub(#s6,Ru32)) | 9 | 0.000% |
| 268 | A2_max | Rd32=max(Rs32,Rt32) | 8 | 0.000% |
| 269 | A2_porf | if (!Pu4) Rd32=or(Rs32,Rt32) | 8 | 0.000% |
| 270 | A4_combineri | Rdd32=combine(Rs32,#s8) | 8 | 0.000% |
| 271 | C2_cmpeqp | Pd4=cmp.eq(Rss32,Rtt32) | 8 | 0.000% |
| 272 | F2_conv_d2df | Rdd32=convert_d2df(Rss32) | 8 | 0.000% |
| 273 | F2_dfadd | Rdd32=dfadd(Rss32,Rtt32) | 8 | 0.000% |
| 274 | F2_dfcmpge | Pd4=dfcmp.ge(Rss32,Rtt32) | 8 | 0.000% |
| 275 | J4_cmpgt_fp0_jump_nt | p0=cmp.gt(Rs16,Rt16); if (!p0.new) jump:nt #r9:2 | 8 | 0.000% |
| 276 | L2_ploadrbtnew_io | if (Pt4.new) Rd32=memb(Rs32+#u6:0) | 8 | 0.000% |
| 277 | L2_ploadrdtnew_io | if (Pt4.new) Rdd32=memd(Rs32+#u6:3) | 8 | 0.000% |
| 278 | L2_ploadruhtnew_io | if (Pt4.new) Rd32=memuh(Rs32+#u6:1) | 8 | 0.000% |
| 279 | M2_dpmpyuu_acc_s0 | Rxx32+=mpyu(Rs32,Rt32) | 8 | 0.000% |
| 280 | S2_asl_i_p | Rdd32=asl(Rss32,#u6) | 8 | 0.000% |
| 281 | S2_asl_i_r_acc | Rx32+=asl(Rs32,#u5) | 8 | 0.000% |
| 282 | S2_pstorerdtnew_pi | if (Pv4.new) memd(Rx32++#s4:3)=Rtt32 | 8 | 0.000% |
| 283 | S4_storeirb_io | memb(Rs32+#u6:0)=#S8 | 8 | 0.000% |
| 284 | J4_cmpeq_tp1_jump_nt | p1=cmp.eq(Rs16,Rt16); if (p1.new) jump:nt #r9:2 | 7 | 0.000% |
| 285 | J4_cmpltu_f_jumpnv_nt | if (!cmp.gtu(Rt32,Ns8.new)) jump:nt #r9:2 | 7 | 0.000% |
| 286 | S4_storeri_rr | memw(Rs32+Ru32<<#u2)=Rt32 | 7 | 0.000% |
| 287 | F2_dfcmpgt | Pd4=dfcmp.gt(Rss32,Rtt32) | 6 | 0.000% |
| 288 | J2_loop1r | loop1(#r7:2,Rs32) | 6 | 0.000% |
| 289 | J4_cmpeq_tp0_jump_t | p0=cmp.eq(Rs16,Rt16); if (p0.new) jump:t #r9:2 | 6 | 0.000% |
| 290 | J4_cmpeqi_t_jumpnv_t | if (cmp.eq(Ns8.new,#U5)) jump:t #r9:2 | 6 | 0.000% |
| 291 | J4_cmpeqn1_t_jumpnv_nt | if (cmp.eq(Ns8.new,#-1)) jump:nt #r9:2 | 6 | 0.000% |
| 292 | J4_cmpgt_tp0_jump_t | p0=cmp.gt(Rs16,Rt16); if (p0.new) jump:t #r9:2 | 6 | 0.000% |
| 293 | J4_cmpgtu_tp1_jump_t | p1=cmp.gtu(Rs16,Rt16); if (p1.new) jump:t #r9:2 | 6 | 0.000% |
| 294 | L2_loadrdgp | Rdd32=memd(gp+#u16:3) | 6 | 0.000% |
| 295 | L2_ploadrhtnew_io | if (Pt4.new) Rd32=memh(Rs32+#u6:1) | 6 | 0.000% |
| 296 | L4_loadri_rr | Rd32=memw(Rs32+Rt32<<#u2) | 6 | 0.000% |
| 297 | S2_togglebit_i | Rd32=togglebit(Rs32,#u5) | 6 | 0.000% |
| 298 | S4_pstorerifnew_rr | if (!Pv4.new) memw(Rs32+Ru32<<#u2)=Rt32 | 6 | 0.000% |
| 299 | SS2_storebi0 | memb(Rs16+#u4:0)=#0 | 6 | 0.000% |
| 300 | V6_vS32b_new_ai | vmem(Rt32+#s4)=Os8.new | 6 | 0.000% |
| 301 | A4_rcmpneqi | Rd32=!cmp.eq(Rs32,#s8) | 5 | 0.000% |
| 302 | L2_ploadrif_io | if (!Pt4) Rd32=memw(Rs32+#u6:2) | 5 | 0.000% |
| 303 | L2_ploadrubfnew_io | if (!Pt4.new) Rd32=memub(Rs32+#u6:0) | 5 | 0.000% |
| 304 | L4_or_memoph_io | memh(Rs32+#u6:1)|=Rt32 | 5 | 0.000% |
| 305 | S2_storerinewgp | memw(gp+#u16:2)=Nt8.new | 5 | 0.000% |
| 306 | SS2_storewi0 | memw(Rs16+#u4:2)=#0 | 5 | 0.000% |
| 307 | Y2_tlbp | Rd32=tlbp(Rs32) | 5 | 0.000% |
| 308 | A2_addh_l16_ll | Rd32=add(Rt.L32,Rs.L32) | 4 | 0.000% |
| 309 | F2_conv_df2w_chop | Rd32=convert_df2w(Rss32):chop | 4 | 0.000% |
| 310 | F2_conv_w2df | Rdd32=convert_w2df(Rs32) | 4 | 0.000% |
| 311 | F2_conv_w2sf | Rd32=convert_w2sf(Rs32) | 4 | 0.000% |
| 312 | F2_sffixupd | Rd32=sffixupd(Rs32,Rt32) | 4 | 0.000% |
| 313 | F2_sffixupn | Rd32=sffixupn(Rs32,Rt32) | 4 | 0.000% |
| 314 | F2_sffma_sc | Rx32+=sfmpy(Rs32,Rt32,Pu4):scale | 4 | 0.000% |
| 315 | F2_sfrecipa | Rd32,Pe4=sfrecipa(Rs32,Rt32) | 4 | 0.000% |
| 316 | J4_cmpgti_fp0_jump_t | p0=cmp.gt(Rs16,#U5); if (!p0.new) jump:t #r9:2 | 4 | 0.000% |
| 317 | L4_ploadritnew_abs | if (Pt4.new) Rd32=memw(#u6) | 4 | 0.000% |
| 318 | S2_vsplatrb | Rd32=vsplatb(Rs32) | 4 | 0.000% |
| 319 | S4_pstorerdtnew_io | if (Pv4.new) memd(Rs32+#u6:3)=Rtt32 | 4 | 0.000% |
| 320 | A2_maxu | Rd32=maxu(Rs32,Rt32) | 3 | 0.000% |
| 321 | A2_vcmpbeq | Pd4=vcmpb.eq(Rss32,Rtt32) | 3 | 0.000% |
| 322 | J2_loop0i | loop0(#r7:2,#U10) | 3 | 0.000% |
| 323 | J4_cmpeq_tp0_jump_nt | p0=cmp.eq(Rs16,Rt16); if (p0.new) jump:nt #r9:2 | 3 | 0.000% |
| 324 | L2_ploadrdf_io | if (!Pt4) Rdd32=memd(Rs32+#u6:3) | 3 | 0.000% |
| 325 | L4_loadri_ap | Rd32=memw(Re32=#U6) | 3 | 0.000% |
| 326 | S2_ct0 | Rd32=ct0(Rs32) | 3 | 0.000% |
| 327 | SA1_combine0i | Rdd8=combine(#0,#u2) | 3 | 0.000% |
| 328 | A2_aslh | Rd32=aslh(Rs32) | 2 | 0.000% |
| 329 | A2_paddt | if (Pu4) Rd32=add(Rs32,Rt32) | 2 | 0.000% |
| 330 | A2_psubtnew | if (Pu4.new) Rd32=sub(Rt32,Rs32) | 2 | 0.000% |
| 331 | A2_zxth | Rd32=zxth(Rs32) | 2 | 0.000% |
| 332 | A4_psxthtnew | if (Pu4.new) Rd32=sxth(Rs32) | 2 | 0.000% |
| 333 | C2_ccombinewf | if (!Pu4) Rdd32=combine(Rs32,Rt32) | 2 | 0.000% |
| 334 | C2_muxri | Rd32=mux(Pu4,#s8,Rs32) | 2 | 0.000% |
| 335 | F2_conv_d2sf | Rd32=convert_d2sf(Rss32) | 2 | 0.000% |
| 336 | F2_conv_sf2df | Rdd32=convert_sf2df(Rs32) | 2 | 0.000% |
| 337 | J4_cmpgti_t_jumpnv_t | if (cmp.gt(Ns8.new,#U5)) jump:t #r9:2 | 2 | 0.000% |
| 338 | J4_cmpgti_tp1_jump_t | p1=cmp.gt(Rs16,#U5); if (p1.new) jump:t #r9:2 | 2 | 0.000% |
| 339 | J4_cmpgtn1_fp0_jump_t | p0=cmp.gt(Rs16,#-1); if (!p0.new) jump:t #r9:2 | 2 | 0.000% |
| 340 | J4_cmpgtu_fp0_jump_t | p0=cmp.gtu(Rs16,Rt16); if (!p0.new) jump:t #r9:2 | 2 | 0.000% |
| 341 | J4_cmplt_f_jumpnv_t | if (!cmp.gt(Rt32,Ns8.new)) jump:t #r9:2 | 2 | 0.000% |
| 342 | L4_ploadrbtnew_rr | if (Pv4.new) Rd32=memb(Rs32+Rt32<<#u2) | 2 | 0.000% |
| 343 | L4_ploadrdf_rr | if (!Pv4) Rdd32=memd(Rs32+Rt32<<#u2) | 2 | 0.000% |
| 344 | S2_asr_i_p | Rdd32=asr(Rss32,#u6) | 2 | 0.000% |
| 345 | S2_asr_i_r_and | Rx32&=asr(Rs32,#u5) | 2 | 0.000% |
| 346 | S2_asr_i_r_nac | Rx32-=asr(Rs32,#u5) | 2 | 0.000% |
| 347 | S2_pstorerdt_io | if (Pv4) memd(Rs32+#u6:3)=Rtt32 | 2 | 0.000% |
| 348 | S2_pstorerht_io | if (Pv4) memh(Rs32+#u6:1)=Rt32 | 2 | 0.000% |
| 349 | S2_pstorerhtnew_pi | if (Pv4.new) memh(Rx32++#s4:1)=Rt32 | 2 | 0.000% |
| 350 | S2_pstorerif_io | if (!Pv4) memw(Rs32+#u6:2)=Rt32 | 2 | 0.000% |
| 351 | S2_pstoreritnew_pi | if (Pv4.new) memw(Rx32++#s4:2)=Rt32 | 2 | 0.000% |
| 352 | S2_storerbgp | memb(gp+#u16:0)=Rt32 | 2 | 0.000% |
| 353 | S4_pstorerbt_rr | if (Pv4) memb(Rs32+Ru32<<#u2)=Rt32 | 2 | 0.000% |
| 354 | S4_pstorerinewtnew_io | if (Pv4.new) memw(Rs32+#u6:2)=Nt8.new | 2 | 0.000% |
| 355 | S4_pstoreritnew_io | if (Pv4.new) memw(Rs32+#u6:2)=Rt32 | 2 | 0.000% |
| 356 | S4_storeirif_io | if (!Pv4) memw(Rs32+#u6:2)=#S6 | 2 | 0.000% |
| 357 | S4_storerb_rr | memb(Rs32+Ru32<<#u2)=Rt32 | 2 | 0.000% |
| 358 | Y2_syncht | syncht | 2 | 0.000% |
| 359 | J4_cmpgtu_fp0_jump_nt | p0=cmp.gtu(Rs16,Rt16); if (!p0.new) jump:nt #r9:2 | 1 | 0.000% |
| 360 | L4_ior_memoph_io | memh(Rs32+#u6:1)=setbit(#U5) | 1 | 0.000% |
| 361 | S2_pstorerbtnew_pi | if (Pv4.new) memb(Rx32++#s4:0)=Rt32 | 1 | 0.000% |
| 362 | Y2_cswi | cswi(Rs32) | 1 | 0.000% |
| 363 | Y2_l2kill | l2kill | 1 | 0.000% |
| Total Count | | | 3715623 | 100.0% |

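The Pct column in the packet table above is simply each packet count divided by the total committed-packet count (3715623). A minimal sketch that recomputes it, using a few sample rows copied from the table above (the row selection here is illustrative, not part of the report):

```python
# Recompute the Pct column of the instruction-count table from raw counts.
# TOTAL comes from the "Total Count" row; the sample rows are taken
# verbatim from the table above.

TOTAL = 3715623  # total committed packets, from the "Total Count" row

rows = [
    ("A2_addi",     427411),  # reported as 11.503%
    ("J2_endloop0", 361197),  # reported as  9.721%
    ("A2_nop",      323491),  # reported as  8.706%
]

for tag, count in rows:
    pct = 100.0 * count / TOTAL
    print(f"{tag:12s} {count:8d} {pct:6.3f}%")
```

Rounded to three decimals, the recomputed values match the Pct column printed in the table.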
| # | Tag | Syntax | Count | Pct |
|---|---|---|---|---|
| 0 | A2_tfrrcr | Cd32=Rs32 | 4345 | 0.117% |
| 1 | C4_nbitsclri | Pd4=!bitsclr(Rs32,#u6) | 20 | 0.001% |
| 2 | C4_nbitsclr | Pd4=!bitsclr(Rs32,Rt32) | 50 | 0.001% |
| 3 | C4_cmpneqi | Pd4=!cmp.eq(Rs32,#s10) | 1270 | 0.034% |
| 4 | C4_cmpneq | Pd4=!cmp.eq(Rs32,Rt32) | 1270 | 0.034% |
| 5 | C4_cmplte | Pd4=!cmp.gt(Rs32,Rt32) | 13 | 0.000% |
| 6 | S4_ntstbit_i | Pd4=!tstbit(Rs32,#u5) | 373 | 0.010% |
| 7 | C2_tfrrp | Pd4=Rs32 | 22 | 0.001% |
| 8 | C2_andn | Pd4=and(Pt4,!Ps4) | 1358 | 0.037% |
| 9 | C2_and | Pd4=and(Pt4,Ps4) | 810 | 0.022% |
| 10 | A4_vcmpbeq_any | Pd4=any8(vcmpb.eq(Rss32,Rtt32)) | 12 | 0.000% |
| 11 | C2_bitsclri | Pd4=bitsclr(Rs32,#u6) | 1283 | 0.035% |
| 12 | C2_bitsclr | Pd4=bitsclr(Rs32,Rt32) | 252 | 0.007% |
| 13 | C2_cmpeqi | Pd4=cmp.eq(Rs32,#s10) | 18674 | 0.503% |
| 14 | C2_cmpeq | Pd4=cmp.eq(Rs32,Rt32) | 3857 | 0.104% |
| 15 | C2_cmpeqp | Pd4=cmp.eq(Rss32,Rtt32) | 8 | 0.000% |
| 16 | C2_cmpgti | Pd4=cmp.gt(Rs32,#s10) | 1405 | 0.038% |
| 17 | C2_cmpgt | Pd4=cmp.gt(Rs32,Rt32) | 3292 | 0.089% |
| 18 | C2_cmpgtp | Pd4=cmp.gt(Rss32,Rtt32) | 36 | 0.001% |
| 19 | C2_cmpgtui | Pd4=cmp.gtu(Rs32,#u9) | 2972 | 0.080% |
| 20 | C2_cmpgtu | Pd4=cmp.gtu(Rs32,Rt32) | 822 | 0.022% |
| 21 | C2_cmpgtup | Pd4=cmp.gtu(Rss32,Rtt32) | 295 | 0.008% |
| 22 | A4_cmpbeqi | Pd4=cmpb.eq(Rs32,#u8) | 202 | 0.005% |
| 23 | F2_dfclass | Pd4=dfclass(Rss32,#u5) | 16 | 0.000% |
| 24 | F2_dfcmpeq | Pd4=dfcmp.eq(Rss32,Rtt32) | 84 | 0.002% |
| 25 | F2_dfcmpge | Pd4=dfcmp.ge(Rss32,Rtt32) | 8 | 0.000% |
| 26 | F2_dfcmpgt | Pd4=dfcmp.gt(Rss32,Rtt32) | 6 | 0.000% |
| 27 | F2_dfcmpuo | Pd4=dfcmp.uo(Rss32,Rtt32) | 98 | 0.003% |
| 28 | C2_not | Pd4=not(Ps4) | 3261 | 0.088% |
| 29 | C2_orn | Pd4=or(Pt4,!Ps4) | 12 | 0.000% |
| 30 | S2_tstbit_i | Pd4=tstbit(Rs32,#u5) | 87 | 0.002% |
| 31 | A2_vcmpbeq | Pd4=vcmpb.eq(Rss32,Rtt32) | 3 | 0.000% |
| 32 | C2_xor | Pd4=xor(Ps4,Pt4) | 52 | 0.001% |
| 33 | J4_jumpseti | Rd16=#U6 ; jump #r9:2 | 35 | 0.001% |
| 34 | SA1_seti | Rd16=#u6 | 569 | 0.015% |
| 35 | SA1_tfr | Rd16=Rs16 | 3254 | 0.088% |
| 36 | J4_jumpsetr | Rd16=Rs16 ; jump #r9:2 | 26 | 0.001% |
| 37 | SA1_dec | Rd16=add(Rs16,#-1) | 114 | 0.003% |
| 38 | SA1_inc | Rd16=add(Rs16,#1) | 162 | 0.004% |
| 39 | SA1_addsp | Rd16=add(r29,#u6:2) | 5698 | 0.153% |
| 40 | SL2_loadrb_io | Rd16=memb(Rs16+#u3:0) | 40 | 0.001% |
| 41 | SL1_loadrub_io | Rd16=memub(Rs16+#u4:0) | 208 | 0.006% |
| 42 | SL2_loadruh_io | Rd16=memuh(Rs16+#u3:1) | 217 | 0.006% |
| 43 | SL1_loadri_io | Rd16=memw(Rs16+#u4:2) | 860 | 0.023% |
| 44 | SL2_loadri_sp | Rd16=memw(r29+#u5:2) | 351 | 0.009% |
| 45 | SA1_sxth | Rd16=sxth(Rs16) | 42 | 0.001% |
| 46 | F2_sfrecipa | Rd32,Pe4=sfrecipa(Rs32,Rt32) | 4 | 0.000% |
| 47 | A4_rcmpneqi | Rd32=!cmp.eq(Rs32,#s8) | 5 | 0.000% |
| 48 | A2_tfrsi | Rd32=#s16 | 10134 | 0.273% |
| 49 | M2_mpysip | Rd32=+mpyi(Rs32,#u8) | 12 | 0.000% |
| 50 | A2_tfrcrr | Rd32=Cs32 | 4345 | 0.117% |
| 51 | C2_tfrpr | Rd32=Ps4 | 28 | 0.001% |
| 52 | A2_tfr | Rd32=Rs32 | 40925 | 1.101% |
| 53 | Y2_tfrscrr | Rd32=Ss64 | 539 | 0.015% |
| 54 | A2_abs | Rd32=abs(Rs32) | 56 | 0.002% |
| 55 | A2_addi | Rd32=add(Rs32,#s16) | 427411 | 11.503% |
| 56 | A2_add | Rd32=add(Rs32,Rt32) | 22483 | 0.605% |
| 57 | S4_addaddi | Rd32=add(Rs32,add(Ru32,#s6)) | 10824 | 0.291% |
| 58 | S4_subaddi | Rd32=add(Rs32,sub(#s6,Ru32)) | 9 | 0.000% |
| 59 | A2_addh_l16_ll | Rd32=add(Rt.L32,Rs.L32) | 4 | 0.000% |
| 60 | C4_addipc | Rd32=add(pc,#u6) | 12 | 0.000% |
| 61 | S2_addasl_rrri | Rd32=addasl(Rt32,Rs32,#u3) | 895 | 0.024% |
| 62 | A2_andir | Rd32=and(Rs32,#s10) | 23233 | 0.625% |
| 63 | A2_and | Rd32=and(Rs32,Rt32) | 70 | 0.002% |
| 64 | S2_asl_i_r | Rd32=asl(Rs32,#u5) | 321 | 0.009% |
| 65 | A2_aslh | Rd32=aslh(Rs32) | 2 | 0.000% |
| 66 | S2_asr_i_r | Rd32=asr(Rs32,#u5) | 256 | 0.007% |
| 67 | S2_cl0 | Rd32=cl0(Rs32) | 402 | 0.011% |
| 68 | S2_cl0p | Rd32=cl0(Rss32) | 72 | 0.002% |
| 69 | S2_clrbit_i | Rd32=clrbit(Rs32,#u5) | 27 | 0.001% |
| 70 | F2_conv_d2sf | Rd32=convert_d2sf(Rss32) | 2 | 0.000% |
| 71 | F2_conv_df2w_chop | Rd32=convert_df2w(Rss32):chop | 4 | 0.000% |
| 72 | F2_conv_w2sf | Rd32=convert_w2sf(Rs32) | 4 | 0.000% |
| 73 | S2_ct0 | Rd32=ct0(Rs32) | 3 | 0.000% |
| 74 | S2_extractu | Rd32=extractu(Rs32,#u5,#U5) | 370 | 0.010% |
| 75 | S2_lsr_i_r | Rd32=lsr(Rs32,#u5) | 12132 | 0.327% |
| 76 | A2_max | Rd32=max(Rs32,Rt32) | 8 | 0.000% |
| 77 | A2_maxu | Rd32=maxu(Rs32,Rt32) | 3 | 0.000% |
| 78 | L2_loadrb_io | Rd32=memb(Rs32+#s11:0) | 9007 | 0.242% |
| 79 | L2_loadrb_pi | Rd32=memb(Rx32++#s4:0) | 347 | 0.009% |
| 80 | L2_loadrh_io | Rd32=memh(Rs32+#s11:1) | 77 | 0.002% |
| 81 | L4_loadrh_rr | Rd32=memh(Rs32+Rt32<<#u2) | 10 | 0.000% |
| 82 | L2_loadrub_io | Rd32=memub(Rs32+#s11:0) | 604 | 0.016% |
| 83 | L4_loadrub_rr | Rd32=memub(Rs32+Rt32<<#u2) | 52 | 0.001% |
| 84 | L2_loadrub_pi | Rd32=memub(Rx32++#s4:0) | 73 | 0.002% |
| 85 | L2_loadrubgp | Rd32=memub(gp+#u16:0) | 220 | 0.006% |
| 86 | L2_loadruh_io | Rd32=memuh(Rs32+#s11:1) | 387 | 0.010% |
| 87 | L4_loadruh_rr | Rd32=memuh(Rs32+Rt32<<#u2) | 22 | 0.001% |
| 88 | L4_loadruh_ur | Rd32=memuh(Rt32<<#u2+#U6) | 22 | 0.001% |
| 89 | L4_loadri_ap | Rd32=memw(Re32=#U6) | 3 | 0.000% |
| 90 | L2_loadri_io | Rd32=memw(Rs32+#s11:2) | 42789 | 1.152% |
| 91 | L4_loadri_rr | Rd32=memw(Rs32+Rt32<<#u2) | 6 | 0.000% |
| 92 | L4_loadri_ur | Rd32=memw(Rt32<<#u2+#U6) | 34 | 0.001% |
| 93 | L2_loadrigp | Rd32=memw(gp+#u16:2) | 7046 | 0.190% |
| 94 | L2_loadw_locked | Rd32=memw_locked(Rs32) | 649 | 0.017% |
| 95 | L4_loadw_phys | Rd32=memw_phys(Rs32,Rt32) | 16 | 0.000% |
| 96 | A2_min | Rd32=min(Rt32,Rs32) | 1089 | 0.029% |
| 97 | A2_minu | Rd32=minu(Rt32,Rs32) | 175 | 0.005% |
| 98 | M2_mpyi | Rd32=mpyi(Rs32,Rt32) | 2401 | 0.065% |
| 99 | C2_muxii | Rd32=mux(Pu4,#s8,#S8) | 638 | 0.017% |
| 100 | C2_muxri | Rd32=mux(Pu4,#s8,Rs32) | 2 | 0.000% |
| 101 | C2_muxir | Rd32=mux(Pu4,Rs32,#s8) | 175 | 0.005% |
| 102 | C2_mux | Rd32=mux(Pu4,Rs32,Rt32) | 17 | 0.000% |
| 103 | A2_or | Rd32=or(Rs32,Rt32) | 2631 | 0.071% |
| 104 | S2_setbit_i | Rd32=setbit(Rs32,#u5) | 24 | 0.001% |
| 105 | F2_sffixupd | Rd32=sffixupd(Rs32,Rt32) | 4 | 0.000% |
| 106 | F2_sffixupn | Rd32=sffixupn(Rs32,Rt32) | 4 | 0.000% |
| 107 | A2_subri | Rd32=sub(#s10,Rs32) | 23002 | 0.619% |
| 108 | A2_subh_l16_ll | Rd32=sub(Rt.L32,Rs.L32) | 22 | 0.001% |
| 109 | A2_sub | Rd32=sub(Rt32,Rs32) | 9907 | 0.267% |
| 110 | A2_sxtb | Rd32=sxtb(Rs32) | 201 | 0.005% |
| 111 | A2_sxth | Rd32=sxth(Rs32) | 371 | 0.010% |
| 112 | Y2_tlbp | Rd32=tlbp(Rs32) | 5 | 0.000% |
| 113 | S2_togglebit_i | Rd32=togglebit(Rs32,#u5) | 6 | 0.000% |
| 114 | S2_vsplatrb | Rd32=vsplatb(Rs32) | 4 | 0.000% |
| 115 | A2_svsubh | Rd32=vsubh(Rt32,Rs32) | 216 | 0.006% |
| 116 | A2_xor | Rd32=xor(Rs32,Rt32) | 28 | 0.001% |
| 117 | A2_zxth | Rd32=zxth(Rs32) | 2 | 0.000% |
| 118 | A2_absp | Rdd32=abs(Rss32) | 54 | 0.001% |
| 119 | A2_addp | Rdd32=add(Rss32,Rtt32) | 259 | 0.007% |
| 120 | S2_asl_i_p | Rdd32=asl(Rss32,#u6) | 8 | 0.000% |
| 121 | S2_asr_i_p | Rdd32=asr(Rss32,#u6) | 2 | 0.000% |
| 122 | A2_combineii | Rdd32=combine(#s8,#S8) | 558 | 0.015% |
| 123 | A4_combineii | Rdd32=combine(#s8,#U6) | 22 | 0.001% |
| 124 | A4_combineir | Rdd32=combine(#s8,Rs32) | 2752 | 0.074% |
| 125 | A4_combineri | Rdd32=combine(Rs32,#s8) | 8 | 0.000% |
| 126 | A2_combinew | Rdd32=combine(Rs32,Rt32) | 7687 | 0.207% |
| 127 | F2_conv_d2df | Rdd32=convert_d2df(Rss32) | 8 | 0.000% |
| 128 | F2_conv_sf2df | Rdd32=convert_sf2df(Rs32) | 2 | 0.000% |
| 129 | F2_conv_w2df | Rdd32=convert_w2df(Rs32) | 4 | 0.000% |
| 130 | L4_return | Rdd32=dealloc_return(Rs32):raw | 1401 | 0.038% |
| 131 | L2_deallocframe | Rdd32=deallocframe(Rs32):raw | 178 | 0.005% |
| 132 | F2_dfadd | Rdd32=dfadd(Rss32,Rtt32) | 8 | 0.000% |
| 133 | F2_dfsub | Rdd32=dfsub(Rss32,Rtt32) | 20 | 0.001% |
| 134 | S2_lsl_r_p | Rdd32=lsl(Rss32,Rt32) | 72 | 0.002% |
| 135 | S2_lsr_i_p | Rdd32=lsr(Rss32,#u6) | 522 | 0.014% |
| 136 | L2_loadrd_io | Rdd32=memd(Rs32+#s11:3) | 427 | 0.011% |
| 137 | L4_loadrd_rr | Rdd32=memd(Rs32+Rt32<<#u2) | 29 | 0.001% |
| 138 | L2_loadrd_pi | Rdd32=memd(Rx32++#s4:3) | 8652 | 0.233% |
| 139 | L2_loadrdgp | Rdd32=memd(gp+#u16:3) | 6 | 0.000% |
| 140 | M2_dpmpyuu_s0 | Rdd32=mpyu(Rs32,Rt32) | 50 | 0.001% |
| 141 | A2_negp | Rdd32=neg(Rss32) | 32 | 0.001% |
| 142 | A2_subp | Rdd32=sub(Rtt32,Rss32) | 285 | 0.008% |
| 143 | S2_lsl_r_vw | Rdd32=vlslw(Rss32,Rt32) | 196 | 0.005% |
| 144 | S2_lsr_i_vw | Rdd32=vlsrw(Rss32,#u5) | 394 | 0.011% |
| 145 | C2_vmux | Rdd32=vmux(Pu4,Rss32,Rtt32) | 542 | 0.015% |
| 146 | SA1_combine0i | Rdd8=combine(#0,#u2) | 3 | 0.000% |
| 147 | SA1_combinezr | Rdd8=combine(#0,Rs16) | 180 | 0.005% |
| 148 | SL2_loadrd_sp | Rdd8=memd(r29+#u5:3) | 20443 | 0.550% |
| 149 | A2_tfrih | Rx.H32=#u16 | 4412 | 0.119% |
| 150 | A2_tfril | Rx.L32=#u16 | 250 | 0.007% |
| 151 | SA1_addi | Rx16=add(Rx16,#s7) | 28 | 0.001% |
| 152 | S2_asr_i_r_and | Rx32&=asr(Rs32,#u5) | 2 | 0.000% |
| 153 | M2_acci | Rx32+=add(Rs32,Rt32) | 2198 | 0.059% |
| 154 | S2_asl_i_r_acc | Rx32+=asl(Rs32,#u5) | 8 | 0.000% |
| 155 | M2_maci | Rx32+=mpyi(Rs32,Rt32) | 1120 | 0.030% |
| 156 | F2_sffma_lib | Rx32+=sfmpy(Rs32,Rt32):lib | 16 | 0.000% |
| 157 | F2_sffma_sc | Rx32+=sfmpy(Rs32,Rt32,Pu4):scale | 4 | 0.000% |
| 158 | S2_asl_i_r_nac | Rx32-=asl(Rs32,#u5) | 4318 | 0.116% |
| 159 | S2_asr_i_r_nac | Rx32-=asr(Rs32,#u5) | 2 | 0.000% |
| 160 | M2_mnaci | Rx32-=mpyi(Rs32,Rt32) | 26 | 0.001% |
| 161 | F2_sffms_lib | Rx32-=sfmpy(Rs32,Rt32):lib | 16 | 0.000% |
| 162 | S4_addi_asl_ri | Rx32=add(#u8,asl(Rx32,#U5)) | 402 | 0.011% |
| 163 | S2_insert | Rx32=insert(Rs32,#u5,#U5) | 22 | 0.001% |
| 164 | S4_or_andix | Rx32=or(Ru32,and(Rx32,#s10)) | 62 | 0.002% |
| 165 | S2_asl_i_r_or | Rx32|=asl(Rs32,#u5) | 33 | 0.001% |
| 166 | S2_lsr_i_p_acc | Rxx32+=lsr(Rss32,#u6) | 16 | 0.000% |
| 167 | M2_dpmpyuu_acc_s0 | Rxx32+=mpyu(Rs32,Rt32) | 8 | 0.000% |
| 168 | S2_insertp | Rxx32=insert(Rss32,#u6,#U6) | 16 | 0.000% |
| 169 | Y2_tfrsrcr | Sd64=Rs32 | 37 | 0.001% |
| 170 | V6_vshuffeb | Vd32.b=vshuffe(Vu32.b,Vv32.b) | 72900 | 1.962% |
| 171 | V6_vL32b_cur_ai | Vd32.cur=vmem(Rt32+#s4) | 18225 | 0.490% |
| 172 | V6_vabsdiffuh | Vd32.uh=vabsdiff(Vu32.uh,Vv32.uh) | 291600 | 7.848% |
| 173 | V6_vminuh | Vd32.uh=vmin(Vu32.uh,Vv32.uh) | 145800 | 3.924% |
| 174 | V6_vassign | Vd32=Vu32 | 218700 | 5.886% |
| 175 | V6_vL32b_ai | Vd32=vmem(Rt32+#s4) | 36450 | 0.981% |
| 176 | V6_vL32Ub_pi | Vd32=vmemu(Rx32++#s3) | 218700 | 5.886% |
| 177 | V6_lvsplatw | Vd32=vsplat(Rt32) | 18 | 0.000% |
| 178 | V6_vaddh_dv | Vdd32.h=vadd(Vuu32.h,Vvv32.h) | 218700 | 5.886% |
| 179 | V6_vmpabus | Vdd32.h=vmpa(Vuu32.ub,Rt32.b) | 145800 | 3.924% |
| 180 | V6_vtmpybus | Vdd32.h=vtmpy(Vuu32.ub,Rt32.b) | 109350 | 2.943% |
| 181 | V6_vzb | Vdd32.uh=vzxt(Vu32.ub) | 145800 | 3.924% |
| 182 | V6_vcombine | Vdd32=vcombine(Vu32,Vv32) | 90 | 0.002% |
| 183 | SS2_allocframe | allocframe(#u5:3) | 5349 | 0.144% |
| 184 | S2_allocframe | allocframe(Rx32,#u11:3):raw | 2391 | 0.064% |
| 185 | J2_call | call #r22:2 | 20263 | 0.545% |
| 186 | J2_callr | callr Rs32 | 189 | 0.005% |
| 187 | Y2_crswap0 | crswap(Rx32,sgp0) | 8686 | 0.234% |
| 188 | Y2_cswi | cswi(Rs32) | 1 | 0.000% |
| 189 | Y2_dccleaninva | dccleaninva(Rs32) | 211508 | 5.692% |
| 190 | Y2_dcfetchbo | dcfetch(Rs32+#u11:3) | 1270 | 0.034% |
| 191 | Y2_dcinva | dcinva(Rs32) | 64800 | 1.744% |
| 192 | Y2_dczeroa | dczeroa(Rs32) | 65440 | 1.761% |
| 193 | SL2_return | dealloc_return | 5943 | 0.160% |
| 194 | SL2_deallocframe | deallocframe | 215 | 0.006% |
| 195 | J2_endloop0 | endloop0 | 361197 | 9.721% |
| 196 | J2_endloop1 | endloop1 | 810 | 0.022% |
| 197 | L2_ploadrif_io | if (!Pt4) Rd32=memw(Rs32+#u6:2) | 5 | 0.000% |
| 198 | L2_ploadrdf_io | if (!Pt4) Rdd32=memd(Rs32+#u6:3) | 3 | 0.000% |
| 199 | L2_ploadrbfnew_io | if (!Pt4.new) Rd32=memb(Rs32+#u6:0) | 174 | 0.005% |
| 200 | L2_ploadrhfnew_io | if (!Pt4.new) Rd32=memh(Rs32+#u6:1) | 28 | 0.001% |
| 201 | L4_ploadrubfnew_abs | if (!Pt4.new) Rd32=memub(#u6) | 182 | 0.005% |
| 202 | L2_ploadrubfnew_io | if (!Pt4.new) Rd32=memub(Rs32+#u6:0) | 5 | 0.000% |
| 203 | L2_ploadrubfnew_pi | if (!Pt4.new) Rd32=memub(Rx32++#s4:0) | 10 | 0.000% |
| 204 | L2_ploadruhfnew_io | if (!Pt4.new) Rd32=memuh(Rs32+#u6:1) | 15 | 0.000% |
| 205 | L2_ploadrifnew_io | if (!Pt4.new) Rd32=memw(Rs32+#u6:2) | 20 | 0.001% |
| 206 | L2_ploadrdfnew_io | if (!Pt4.new) Rdd32=memd(Rs32+#u6:3) | 34 | 0.001% |
| 207 | C2_cmoveif | if (!Pu4) Rd32=#s12 | 180 | 0.005% |
| 208 | A2_paddif | if (!Pu4) Rd32=add(Rs32,#s8) | 27 | 0.001% |
| 209 | A2_porf | if (!Pu4) Rd32=or(Rs32,Rt32) | 8 | 0.000% |
| 210 | A2_psubf | if (!Pu4) Rd32=sub(Rt32,Rs32) | 174 | 0.005% |
| 211 | C2_ccombinewf | if (!Pu4) Rdd32=combine(Rs32,Rt32) | 2 | 0.000% |
| 212 | J2_jumpf | if (!Pu4) jump:nt #r15:2 | 1724 | 0.046% |
| 213 | J2_jumprf | if (!Pu4) jumpr:nt Rs32 | 1509 | 0.041% |
| 214 | C2_cmovenewif | if (!Pu4.new) Rd32=#s12 | 429 | 0.012% |
| 215 | A2_paddifnew | if (!Pu4.new) Rd32=add(Rs32,#s8) | 1618 | 0.044% |
| 216 | A2_paddfnew | if (!Pu4.new) Rd32=add(Rs32,Rt32) | 3848 | 0.104% |
| 217 | A2_psubfnew | if (!Pu4.new) Rd32=sub(Rt32,Rs32) | 394 | 0.011% |
| 218 | C2_ccombinewnewf | if (!Pu4.new) Rdd32=combine(Rs32,Rt32) | 16 | 0.000% |
| 219 | J2_jumpfnew | if (!Pu4.new) jump:nt #r15:2 | 1046 | 0.028% |
| 220 | J2_jumpfnewpt | if (!Pu4.new) jump:t #r15:2 | 224 | 0.006% |
| 221 | J2_jumprfnew | if (!Pu4.new) jumpr:nt Rs32 | 19 | 0.001% |
| 222 | L4_ploadrdf_rr | if (!Pv4) Rdd32=memd(Rs32+Rt32<<#u2) | 2 | 0.000% |
| 223 | S4_storeirif_io | if (!Pv4) memw(Rs32+#u6:2)=#S6 | 2 | 0.000% |
| 224 | S2_pstorerif_io | if (!Pv4) memw(Rs32+#u6:2)=Rt32 | 2 | 0.000% |
| 225 | L4_ploadrdfnew_rr | if (!Pv4.new) Rdd32=memd(Rs32+Rt32<<#u2) | 12 | 0.000% |
| 226 | S4_storeirifnew_io | if (!Pv4.new) memw(Rs32+#u6:2)=#S6 | 14 | 0.000% |
| 227 | S4_pstorerifnew_io | if (!Pv4.new) memw(Rs32+#u6:2)=Rt32 | 178 | 0.005% |
| 228 | S4_pstorerifnew_rr | if (!Pv4.new) memw(Rs32+Ru32<<#u2)=Rt32 | 6 | 0.000% |
| 229 | J4_cmpeqi_f_jumpnv_nt | if (!cmp.eq(Ns8.new,#U5)) jump:nt #r9:2 | 11 | 0.000% |
| 230 | J4_cmpeqi_f_jumpnv_t | if (!cmp.eq(Ns8.new,#U5)) jump:t #r9:2 | 55 | 0.001% |
| 231 | J4_cmpeq_f_jumpnv_t | if (!cmp.eq(Ns8.new,Rt32)) jump:t #r9:2 | 9 | 0.000% |
| 232 | J4_cmpgtn1_f_jumpnv_nt | if (!cmp.gt(Ns8.new,#-1)) jump:nt #r9:2 | 15 | 0.000% |
| 233 | J4_cmpgti_f_jumpnv_nt | if (!cmp.gt(Ns8.new,#U5)) jump:nt #r9:2 | 248 | 0.007% |
| 234 | J4_cmpgti_f_jumpnv_t | if (!cmp.gt(Ns8.new,#U5)) jump:t #r9:2 | 202 | 0.005% |
| 235 | J4_cmpgt_f_jumpnv_t | if (!cmp.gt(Ns8.new,Rt32)) jump:t #r9:2 | 26 | 0.001% |
| 236 | J4_cmplt_f_jumpnv_nt | if (!cmp.gt(Rt32,Ns8.new)) jump:nt #r9:2 | 22 | 0.001% |
| 237 | J4_cmplt_f_jumpnv_t | if (!cmp.gt(Rt32,Ns8.new)) jump:t #r9:2 | 2 | 0.000% |
| 238 | J4_cmpgtu_f_jumpnv_t | if (!cmp.gtu(Ns8.new,Rt32)) jump:t #r9:2 | 11 | 0.000% |
| 239 | J4_cmpltu_f_jumpnv_nt | if (!cmp.gtu(Rt32,Ns8.new)) jump:nt #r9:2 | 7 | 0.000% |
| 240 | J4_cmpltu_f_jumpnv_t | if (!cmp.gtu(Rt32,Ns8.new)) jump:t #r9:2 | 371 | 0.010% |
| 241 | L2_ploadrit_io | if (Pt4) Rd32=memw(Rs32+#u6:2) | 4860 | 0.131% |
| 242 | L2_ploadrbtnew_io | if (Pt4.new) Rd32=memb(Rs32+#u6:0) | 8 | 0.000% |
| 243 | L2_ploadrhtnew_io | if (Pt4.new) Rd32=memh(Rs32+#u6:1) | 6 | 0.000% |
| 244 | L2_ploadruhtnew_io | if (Pt4.new) Rd32=memuh(Rs32+#u6:1) | 8 | 0.000% |
| 245 | L4_ploadritnew_abs | if (Pt4.new) Rd32=memw(#u6) | 4 | 0.000% |
| 246 | L2_ploadritnew_io | if (Pt4.new) Rd32=memw(Rs32+#u6:2) | 870 | 0.023% |
| 247 | L2_ploadrdtnew_io | if (Pt4.new) Rdd32=memd(Rs32+#u6:3) | 8 | 0.000% |
| 248 | C2_cmoveit | if (Pu4) Rd32=#s12 | 24 | 0.001% |
| 249 | A2_paddit | if (Pu4) Rd32=add(Rs32,#s8) | 2436 | 0.066% |
| 250 | A2_paddt | if (Pu4) Rd32=add(Rs32,Rt32) | 2 | 0.000% |
| 251 | C2_ccombinewt | if (Pu4) Rdd32=combine(Rs32,Rt32) | 1622 | 0.044% |
| 252 | J2_jumpt | if (Pu4) jump:nt #r15:2 | 7531 | 0.203% |
| 253 | J2_jumprt | if (Pu4) jumpr:nt Rs32 | 35 | 0.001% |
| 254 | C2_cmovenewit | if (Pu4.new) Rd32=#s12 | 258 | 0.007% |
| 255 | A2_padditnew | if (Pu4.new) Rd32=add(Rs32,#s8) | 1490 | 0.040% |
| 256 | A2_psubtnew | if (Pu4.new) Rd32=sub(Rt32,Rs32) | 2 | 0.000% |
| 257 | A4_psxthtnew | if (Pu4.new) Rd32=sxth(Rs32) | 2 | 0.000% |
| 258 | J2_jumptnew | if (Pu4.new) jump:nt #r15:2 | 6442 | 0.173% |
| 259 | J2_jumptnewpt | if (Pu4.new) jump:t #r15:2 | 370 | 0.010% |
| 260 | S4_pstorerbt_rr | if (Pv4) memb(Rs32+Ru32<<#u2)=Rt32 | 2 | 0.000% |
| 261 | S2_pstorerbt_pi | if (Pv4) memb(Rx32++#s4:0)=Rt32 | 255 | 0.007% |
| 262 | S2_pstorerdt_io | if (Pv4) memd(Rs32+#u6:3)=Rtt32 | 2 | 0.000% |
| 263 | S2_pstorerdt_pi | if (Pv4) memd(Rx32++#s4:3)=Rtt32 | 8652 | 0.233% |
| 264 | S2_pstorerhnewt_io | if (Pv4) memh(Rs32+#u6:1)=Nt8.new | 20 | 0.001% |
| 265 | S2_pstorerht_io | if (Pv4) memh(Rs32+#u6:1)=Rt32 | 2 | 0.000% |
| 266 | L4_ploadrbtnew_rr | if (Pv4.new) Rd32=memb(Rs32+Rt32<<#u2) | 2 | 0.000% |
| 267 | S4_pstorerbtnew_io | if (Pv4.new) memb(Rs32+#u6:0)=Rt32 | 12 | 0.000% |
| 268 | S2_pstorerbtnew_pi | if (Pv4.new) memb(Rx32++#s4:0)=Rt32 | 1 | 0.000% |
| 269 | S4_pstorerdtnew_io | if (Pv4.new) memd(Rs32+#u6:3)=Rtt32 | 4 | 0.000% |
| 270 | S2_pstorerdtnew_pi | if (Pv4.new) memd(Rx32++#s4:3)=Rtt32 | 8 | 0.000% |
| 271 | S4_storeirhtnew_io | if (Pv4.new) memh(Rs32+#u6:1)=#S6 | 23 | 0.001% |
| 272 | S2_pstorerhtnew_pi | if (Pv4.new) memh(Rx32++#s4:1)=Rt32 | 2 | 0.000% |
| 273 | S4_storeiritnew_io | if (Pv4.new) memw(Rs32+#u6:2)=#S6 | 522 | 0.014% |
| 274 | S4_pstorerinewtnew_io | if (Pv4.new) memw(Rs32+#u6:2)=Nt8.new | 2 | 0.000% |
| 275 | S4_pstoreritnew_io | if (Pv4.new) memw(Rs32+#u6:2)=Rt32 | 2 | 0.000% |
| 276 | S2_pstoreritnew_pi | if (Pv4.new) memw(Rx32++#s4:2)=Rt32 | 2 | 0.000% |
| 277 | J2_jumprzpt | if (Rs32!=#0) jump:t #r13:2 | 127 | 0.003% |
| 278 | J4_cmpeqn1_t_jumpnv_nt | if (cmp.eq(Ns8.new,#-1)) jump:nt #r9:2 | 6 | 0.000% |
| 279 | J4_cmpeqi_t_jumpnv_nt | if (cmp.eq(Ns8.new,#U5)) jump:nt #r9:2 | 386 | 0.010% |
| 280 | J4_cmpeqi_t_jumpnv_t | if (cmp.eq(Ns8.new,#U5)) jump:t #r9:2 | 6 | 0.000% |
| 281 | J4_cmpeq_t_jumpnv_nt | if (cmp.eq(Ns8.new,Rt32)) jump:nt #r9:2 | 321 | 0.009% |
| 282 | J4_cmpeq_t_jumpnv_t | if (cmp.eq(Ns8.new,Rt32)) jump:t #r9:2 | 14 | 0.000% |
| 283 | J4_cmpgti_t_jumpnv_t | if (cmp.gt(Ns8.new,#U5)) jump:t #r9:2 | 2 | 0.000% |
| 284 | J4_cmplt_t_jumpnv_t | if (cmp.gt(Rt32,Ns8.new)) jump:t #r9:2 | 20 | 0.001% |
| 285 | J4_cmpgtu_t_jumpnv_t | if (cmp.gtu(Ns8.new,Rt32)) jump:t #r9:2 | 176 | 0.005% |
| 286 | J4_cmpltu_t_jumpnv_t | if (cmp.gtu(Rt32,Ns8.new)) jump:t #r9:2 | 13 | 0.000% |
| 287 | SL2_jumpr31_t | if (p0) jumpr r31 | 197 | 0.005% |
| 288 | SA1_clrtnew | if (p0.new) Rd16=#0 | 1088 | 0.029% |
| 289 | SL2_jumpr31_tnew | if (p0.new) jumpr:nt r31 | 2346 | 0.063% |
| 290 | Y2_isync | isync | 164 | 0.004% |
| 291 | J2_jump | jump #r22:2 | 14926 | 0.402% |
| 292 | J2_jumpr | jumpr Rs32 | 13059 | 0.351% |
| 293 | SL2_jumpr31 | jumpr r31 | 41 | 0.001% |
| 294 | Y2_l2kill | l2kill | 1 | 0.000% |
| 295 | J2_loop0i | loop0(#r7:2,#U10) | 3 | 0.000% |
| 296 | J2_loop0r | loop0(#r7:2,Rs32) | 12230 | 0.329% |
| 297 | J2_loop1r | loop1(#r7:2,Rs32) | 6 | 0.000% |
| 298 | SS2_storebi0 | memb(Rs16+#u4:0)=#0 | 6 | 0.000% |
| 299 | S2_storerbnew_io | memb(Rs32+#s11:0)=Nt8.new | 34 | 0.001% |
| 300 | S2_storerb_io | memb(Rs32+#s11:0)=Rt32 | 212 | 0.006% |
| 301 | S4_storeirb_io | memb(Rs32+#u6:0)=#S8 | 8 | 0.000% |
| 302 | S4_storerb_rr | memb(Rs32+Ru32<<#u2)=Rt32 | 2 | 0.000% |
| 303 | S2_storerbnew_pi | memb(Rx32++#s4:0)=Nt8.new | 75 | 0.002% |
| 304 | S2_storerb_pi | memb(Rx32++#s4:0)=Rt32 | 12 | 0.000% |
| 305 | S2_storerbgp | memb(gp+#u16:0)=Rt32 | 2 | 0.000% |
| 306 | S2_storerd_io | memd(Rs32+#s11:3)=Rtt32 | 13949 | 0.375% |
| 307 | S4_storerd_rr | memd(Rs32+Ru32<<#u2)=Rtt32 | 16 | 0.000% |
| 308 | SS2_stored_sp | memd(r29+#s6:3)=Rtt8 | 8116 | 0.218% |
| 309 | S2_storerhnew_io | memh(Rs32+#s11:1)=Nt8.new | 117 | 0.003% |
| 310 | S2_storerh_io | memh(Rs32+#s11:1)=Rt32 | 80 | 0.002% |
| 311 | S4_storeirh_io | memh(Rs32+#u6:1)=#S8 | 200 | 0.005% |
| 312 | L4_ior_memoph_io | memh(Rs32+#u6:1)=setbit(#U5) | 1 | 0.000% |
| 313 | L4_or_memoph_io | memh(Rs32+#u6:1)|=Rt32 | 5 | 0.000% |
| 314 | S4_storerhnew_rr | memh(Rs32+Ru32<<#u2)=Nt8.new | 20 | 0.001% |
| 315 | SS2_storewi0 | memw(Rs16+#u4:2)=#0 | 5 | 0.000% |
| 316 | SS1_storew_io | memw(Rs16+#u4:2)=Rt16 | 51 | 0.001% |
| 317 | S2_storerinew_io | memw(Rs32+#s11:2)=Nt8.new | 6020 | 0.162% |
| 318 | S2_storeri_io | memw(Rs32+#s11:2)=Rt32 | 10229 | 0.275% |
| 319 | L4_add_memopw_io | memw(Rs32+#u6:2)+=Rt32 | 175 | 0.005% |
| 320 | S4_storeiri_io | memw(Rs32+#u6:2)=#S8 | 285 | 0.008% |
| 321 | S4_storeri_rr | memw(Rs32+Ru32<<#u2)=Rt32 | 7 | 0.000% |
| 322 | S2_storerinewgp | memw(gp+#u16:2)=Nt8.new | 5 | 0.000% |
| 323 | S2_storerigp | memw(gp+#u16:2)=Rt32 | 11 | 0.000% |
| 324 | SS2_storew_sp | memw(r29+#u5:2)=Rt16 | 6518 | 0.175% |
| 325 | S2_storew_locked | memw_locked(Rs32,Pd4)=Rt32 | 449 | 0.012% |
| 326 | A2_nop | nop | 323491 | 8.706% |
| 327 | J4_cmpeqn1_fp0_jump_t | p0=cmp.eq(Rs16,#-1); if (!p0.new) jump:t #r9:2 | 4328 | 0.116% |
| 328 | J4_cmpeqi_fp0_jump_nt | p0=cmp.eq(Rs16,#U5); if (!p0.new) jump:nt #r9:2 | 24 | 0.001% |
| 329 | J4_cmpeqi_fp0_jump_t | p0=cmp.eq(Rs16,#U5); if (!p0.new) jump:t #r9:2 | 2847 | 0.077% |
| 330 | J4_cmpeqi_tp0_jump_nt | p0=cmp.eq(Rs16,#U5); if (p0.new) jump:nt #r9:2 | 15811 | 0.426% |
| 331 | J4_cmpeqi_tp0_jump_t | p0=cmp.eq(Rs16,#U5); if (p0.new) jump:t #r9:2 | 179 | 0.005% |
| 332 | SA1_cmpeqi | p0=cmp.eq(Rs16,#u2) | 2520 | 0.068% |
| 333 | J4_cmpeq_fp0_jump_t | p0=cmp.eq(Rs16,Rt16); if (!p0.new) jump:t #r9:2 | 40 | 0.001% |
| 334 | J4_cmpeq_tp0_jump_nt | p0=cmp.eq(Rs16,Rt16); if (p0.new) jump:nt #r9:2 | 3 | 0.000% |
| 335 | J4_cmpeq_tp0_jump_t | p0=cmp.eq(Rs16,Rt16); if (p0.new) jump:t #r9:2 | 6 | 0.000% |
| 336 | J4_cmpgtn1_fp0_jump_nt | p0=cmp.gt(Rs16,#-1); if (!p0.new) jump:nt #r9:2 | 14 | 0.000% |
| 337 | J4_cmpgtn1_fp0_jump_t | p0=cmp.gt(Rs16,#-1); if (!p0.new) jump:t #r9:2 | 2 | 0.000% |
| 338 | J4_cmpgtn1_tp0_jump_t | p0=cmp.gt(Rs16,#-1); if (p0.new) jump:t #r9:2 | 14 | 0.000% |
| 339 | J4_cmpgti_fp0_jump_nt | p0=cmp.gt(Rs16,#U5); if (!p0.new) jump:nt #r9:2 | 92 | 0.002% |
| 340 | J4_cmpgti_fp0_jump_t | p0=cmp.gt(Rs16,#U5); if (!p0.new) jump:t #r9:2 | 4 | 0.000% |
| 341 | J4_cmpgti_tp0_jump_nt | p0=cmp.gt(Rs16,#U5); if (p0.new) jump:nt #r9:2 | 29 | 0.001% |
| 342 | J4_cmpgti_tp0_jump_t | p0=cmp.gt(Rs16,#U5); if (p0.new) jump:t #r9:2 | 228 | 0.006% |
| 343 | J4_cmpgt_fp0_jump_nt | p0=cmp.gt(Rs16,Rt16); if (!p0.new) jump:nt #r9:2 | 8 | 0.000% |
| 344 | J4_cmpgt_fp0_jump_t | p0=cmp.gt(Rs16,Rt16); if (!p0.new) jump:t #r9:2 | 37 | 0.001% |
| 345 | J4_cmpgt_tp0_jump_t | p0=cmp.gt(Rs16,Rt16); if (p0.new) jump:t #r9:2 | 6 | 0.000% |
| 346 | J4_cmpgtui_fp0_jump_nt | p0=cmp.gtu(Rs16,#U5); if (!p0.new) jump:nt #r9:2 | 9 | 0.000% |
| 347 | J4_cmpgtui_tp0_jump_t | p0=cmp.gtu(Rs16,#U5); if (p0.new) jump:t #r9:2 | 29 | 0.001% |
| 348 | J4_cmpgtu_fp0_jump_nt | p0=cmp.gtu(Rs16,Rt16); if (!p0.new) jump:nt #r9:2 | 1 | 0.000% |
| 349 | J4_cmpgtu_fp0_jump_t | p0=cmp.gtu(Rs16,Rt16); if (!p0.new) jump:t #r9:2 | 2 | 0.000% |
| 350 | J4_cmpgtu_tp0_jump_nt | p0=cmp.gtu(Rs16,Rt16); if (p0.new) jump:nt #r9:2 | 11 | 0.000% |
| 351 | J4_cmpgtu_tp0_jump_t | p0=cmp.gtu(Rs16,Rt16); if (p0.new) jump:t #r9:2 | 9 | 0.000% |
| 352 | J4_tstbit0_fp0_jump_t | p0=tstbit(Rs16,#0); if (!p0.new) jump:t #r9:2 | 14 | 0.000% |
| 353 | J4_cmpeqi_tp1_jump_nt | p1=cmp.eq(Rs16,#U5); if (p1.new) jump:nt #r9:2 | 17 | 0.000% |
| 354 | J4_cmpeq_tp1_jump_nt | p1=cmp.eq(Rs16,Rt16); if (p1.new) jump:nt #r9:2 | 7 | 0.000% |
| 355 | J4_cmpgti_tp1_jump_t | p1=cmp.gt(Rs16,#U5); if (p1.new) jump:t #r9:2 | 2 | 0.000% |
| 356 | J4_cmpgtu_tp1_jump_t | p1=cmp.gtu(Rs16,Rt16); if (p1.new) jump:t #r9:2 | 6 | 0.000% |
| 357 | J2_ploop1sr | p3=sp1loop0(#r7:2,Rs32) | 1270 | 0.034% |
| 358 | J2_rte | rte | 4343 | 0.117% |
| 359 | Y2_syncht | syncht | 2 | 0.000% |
| 360 | Y2_tlbw | tlbw(Rss32,Rt32) | 133 | 0.004% |
| 361 | J2_trap0 | trap0(#u8) | 4334 | 0.117% |
| 362 | V6_vS32b_new_ai | vmem(Rt32+#s4)=Os8.new | 6 | 0.000% |
| 363 | V6_vS32Ub_pi | vmemu(Rx32++#s3)=Vs32 | 72901 | 1.962% |
| | Total Count | | 3715623 | 100% |
| # | Tag | Syntax | Count | Pct |
|---|---|---|---|---|
| 0 | A2_abs | Rd32=abs(Rs32) | 56 | 0.002% |
| 1 | A2_absp | Rdd32=abs(Rss32) | 54 | 0.001% |
| 2 | A2_add | Rd32=add(Rs32,Rt32) | 22483 | 0.605% |
| 3 | A2_addh_l16_ll | Rd32=add(Rt.L32,Rs.L32) | 4 | 0.000% |
| 4 | A2_addi | Rd32=add(Rs32,#s16) | 427411 | 11.503% |
| 5 | A2_addp | Rdd32=add(Rss32,Rtt32) | 259 | 0.007% |
| 6 | A2_and | Rd32=and(Rs32,Rt32) | 70 | 0.002% |
| 7 | A2_andir | Rd32=and(Rs32,#s10) | 23233 | 0.625% |
| 8 | A2_aslh | Rd32=aslh(Rs32) | 2 | 0.000% |
| 9 | A2_combineii | Rdd32=combine(#s8,#S8) | 558 | 0.015% |
| 10 | A2_combinew | Rdd32=combine(Rs32,Rt32) | 7687 | 0.207% |
| 11 | A2_max | Rd32=max(Rs32,Rt32) | 8 | 0.000% |
| 12 | A2_maxu | Rd32=maxu(Rs32,Rt32) | 3 | 0.000% |
| 13 | A2_min | Rd32=min(Rt32,Rs32) | 1089 | 0.029% |
| 14 | A2_minu | Rd32=minu(Rt32,Rs32) | 175 | 0.005% |
| 15 | A2_negp | Rdd32=neg(Rss32) | 32 | 0.001% |
| 16 | A2_nop | nop | 323491 | 8.706% |
| 17 | A2_or | Rd32=or(Rs32,Rt32) | 2631 | 0.071% |
| 18 | A2_paddfnew | if (!Pu4.new) Rd32=add(Rs32,Rt32) | 3848 | 0.104% |
| 19 | A2_paddif | if (!Pu4) Rd32=add(Rs32,#s8) | 27 | 0.001% |
| 20 | A2_paddifnew | if (!Pu4.new) Rd32=add(Rs32,#s8) | 1618 | 0.044% |
| 21 | A2_paddit | if (Pu4) Rd32=add(Rs32,#s8) | 2436 | 0.066% |
| 22 | A2_padditnew | if (Pu4.new) Rd32=add(Rs32,#s8) | 1490 | 0.040% |
| 23 | A2_paddt | if (Pu4) Rd32=add(Rs32,Rt32) | 2 | 0.000% |
| 24 | A2_porf | if (!Pu4) Rd32=or(Rs32,Rt32) | 8 | 0.000% |
| 25 | A2_psubf | if (!Pu4) Rd32=sub(Rt32,Rs32) | 174 | 0.005% |
| 26 | A2_psubfnew | if (!Pu4.new) Rd32=sub(Rt32,Rs32) | 394 | 0.011% |
| 27 | A2_psubtnew | if (Pu4.new) Rd32=sub(Rt32,Rs32) | 2 | 0.000% |
| 28 | A2_sub | Rd32=sub(Rt32,Rs32) | 9907 | 0.267% |
| 29 | A2_subh_l16_ll | Rd32=sub(Rt.L32,Rs.L32) | 22 | 0.001% |
| 30 | A2_subp | Rdd32=sub(Rtt32,Rss32) | 285 | 0.008% |
| 31 | A2_subri | Rd32=sub(#s10,Rs32) | 23002 | 0.619% |
| 32 | A2_svsubh | Rd32=vsubh(Rt32,Rs32) | 216 | 0.006% |
| 33 | A2_sxtb | Rd32=sxtb(Rs32) | 201 | 0.005% |
| 34 | A2_sxth | Rd32=sxth(Rs32) | 371 | 0.010% |
| 35 | A2_tfr | Rd32=Rs32 | 40925 | 1.101% |
| 36 | A2_tfrcrr | Rd32=Cs32 | 4345 | 0.117% |
| 37 | A2_tfrih | Rx.H32=#u16 | 4412 | 0.119% |
| 38 | A2_tfril | Rx.L32=#u16 | 250 | 0.007% |
| 39 | A2_tfrrcr | Cd32=Rs32 | 4345 | 0.117% |
| 40 | A2_tfrsi | Rd32=#s16 | 10134 | 0.273% |
| 41 | A2_vcmpbeq | Pd4=vcmpb.eq(Rss32,Rtt32) | 3 | 0.000% |
| 42 | A2_xor | Rd32=xor(Rs32,Rt32) | 28 | 0.001% |
| 43 | A2_zxth | Rd32=zxth(Rs32) | 2 | 0.000% |
| 44 | A4_cmpbeqi | Pd4=cmpb.eq(Rs32,#u8) | 202 | 0.005% |
| 45 | A4_combineii | Rdd32=combine(#s8,#U6) | 22 | 0.001% |
| 46 | A4_combineir | Rdd32=combine(#s8,Rs32) | 2752 | 0.074% |
| 47 | A4_combineri | Rdd32=combine(Rs32,#s8) | 8 | 0.000% |
| 48 | A4_psxthtnew | if (Pu4.new) Rd32=sxth(Rs32) | 2 | 0.000% |
| 49 | A4_rcmpneqi | Rd32=!cmp.eq(Rs32,#s8) | 5 | 0.000% |
| 50 | A4_vcmpbeq_any | Pd4=any8(vcmpb.eq(Rss32,Rtt32)) | 12 | 0.000% |
| 51 | C2_and | Pd4=and(Pt4,Ps4) | 810 | 0.022% |
| 52 | C2_andn | Pd4=and(Pt4,!Ps4) | 1358 | 0.037% |
| 53 | C2_bitsclr | Pd4=bitsclr(Rs32,Rt32) | 252 | 0.007% |
| 54 | C2_bitsclri | Pd4=bitsclr(Rs32,#u6) | 1283 | 0.035% |
| 55 | C2_ccombinewf | if (!Pu4) Rdd32=combine(Rs32,Rt32) | 2 | 0.000% |
| 56 | C2_ccombinewnewf | if (!Pu4.new) Rdd32=combine(Rs32,Rt32) | 16 | 0.000% |
| 57 | C2_ccombinewt | if (Pu4) Rdd32=combine(Rs32,Rt32) | 1622 | 0.044% |
| 58 | C2_cmoveif | if (!Pu4) Rd32=#s12 | 180 | 0.005% |
| 59 | C2_cmoveit | if (Pu4) Rd32=#s12 | 24 | 0.001% |
| 60 | C2_cmovenewif | if (!Pu4.new) Rd32=#s12 | 429 | 0.012% |
| 61 | C2_cmovenewit | if (Pu4.new) Rd32=#s12 | 258 | 0.007% |
| 62 | C2_cmpeq | Pd4=cmp.eq(Rs32,Rt32) | 3857 | 0.104% |
| 63 | C2_cmpeqi | Pd4=cmp.eq(Rs32,#s10) | 18674 | 0.503% |
| 64 | C2_cmpeqp | Pd4=cmp.eq(Rss32,Rtt32) | 8 | 0.000% |
| 65 | C2_cmpgt | Pd4=cmp.gt(Rs32,Rt32) | 3292 | 0.089% |
| 66 | C2_cmpgti | Pd4=cmp.gt(Rs32,#s10) | 1405 | 0.038% |
| 67 | C2_cmpgtp | Pd4=cmp.gt(Rss32,Rtt32) | 36 | 0.001% |
| 68 | C2_cmpgtu | Pd4=cmp.gtu(Rs32,Rt32) | 822 | 0.022% |
| 69 | C2_cmpgtui | Pd4=cmp.gtu(Rs32,#u9) | 2972 | 0.080% |
| 70 | C2_cmpgtup | Pd4=cmp.gtu(Rss32,Rtt32) | 295 | 0.008% |
| 71 | C2_mux | Rd32=mux(Pu4,Rs32,Rt32) | 17 | 0.000% |
| 72 | C2_muxii | Rd32=mux(Pu4,#s8,#S8) | 638 | 0.017% |
| 73 | C2_muxir | Rd32=mux(Pu4,Rs32,#s8) | 175 | 0.005% |
| 74 | C2_muxri | Rd32=mux(Pu4,#s8,Rs32) | 2 | 0.000% |
| 75 | C2_not | Pd4=not(Ps4) | 3261 | 0.088% |
| 76 | C2_orn | Pd4=or(Pt4,!Ps4) | 12 | 0.000% |
| 77 | C2_tfrpr | Rd32=Ps4 | 28 | 0.001% |
| 78 | C2_tfrrp | Pd4=Rs32 | 22 | 0.001% |
| 79 | C2_vmux | Rdd32=vmux(Pu4,Rss32,Rtt32) | 542 | 0.015% |
| 80 | C2_xor | Pd4=xor(Ps4,Pt4) | 52 | 0.001% |
| 81 | C4_addipc | Rd32=add(pc,#u6) | 12 | 0.000% |
| 82 | C4_cmplte | Pd4=!cmp.gt(Rs32,Rt32) | 13 | 0.000% |
| 83 | C4_cmpneq | Pd4=!cmp.eq(Rs32,Rt32) | 1270 | 0.034% |
| 84 | C4_cmpneqi | Pd4=!cmp.eq(Rs32,#s10) | 1270 | 0.034% |
| 85 | C4_nbitsclr | Pd4=!bitsclr(Rs32,Rt32) | 50 | 0.001% |
| 86 | C4_nbitsclri | Pd4=!bitsclr(Rs32,#u6) | 20 | 0.001% |
| 87 | F2_conv_d2df | Rdd32=convert_d2df(Rss32) | 8 | 0.000% |
| 88 | F2_conv_d2sf | Rd32=convert_d2sf(Rss32) | 2 | 0.000% |
| 89 | F2_conv_df2w_chop | Rd32=convert_df2w(Rss32):chop | 4 | 0.000% |
| 90 | F2_conv_sf2df | Rdd32=convert_sf2df(Rs32) | 2 | 0.000% |
| 91 | F2_conv_w2df | Rdd32=convert_w2df(Rs32) | 4 | 0.000% |
| 92 | F2_conv_w2sf | Rd32=convert_w2sf(Rs32) | 4 | 0.000% |
| 93 | F2_dfadd | Rdd32=dfadd(Rss32,Rtt32) | 8 | 0.000% |
| 94 | F2_dfclass | Pd4=dfclass(Rss32,#u5) | 16 | 0.000% |
| 95 | F2_dfcmpeq | Pd4=dfcmp.eq(Rss32,Rtt32) | 84 | 0.002% |
| 96 | F2_dfcmpge | Pd4=dfcmp.ge(Rss32,Rtt32) | 8 | 0.000% |
| 97 | F2_dfcmpgt | Pd4=dfcmp.gt(Rss32,Rtt32) | 6 | 0.000% |
| 98 | F2_dfcmpuo | Pd4=dfcmp.uo(Rss32,Rtt32) | 98 | 0.003% |
| 99 | F2_dfsub | Rdd32=dfsub(Rss32,Rtt32) | 20 | 0.001% |
| 100 | F2_sffixupd | Rd32=sffixupd(Rs32,Rt32) | 4 | 0.000% |
| 101 | F2_sffixupn | Rd32=sffixupn(Rs32,Rt32) | 4 | 0.000% |
| 102 | F2_sffma_lib | Rx32+=sfmpy(Rs32,Rt32):lib | 16 | 0.000% |
| 103 | F2_sffma_sc | Rx32+=sfmpy(Rs32,Rt32,Pu4):scale | 4 | 0.000% |
| 104 | F2_sffms_lib | Rx32-=sfmpy(Rs32,Rt32):lib | 16 | 0.000% |
| 105 | F2_sfrecipa | Rd32,Pe4=sfrecipa(Rs32,Rt32) | 4 | 0.000% |
| 106 | J2_call | call #r22:2 | 20263 | 0.545% |
| 107 | J2_callr | callr Rs32 | 189 | 0.005% |
| 108 | J2_endloop0 | endloop0 | 361197 | 9.721% |
| 109 | J2_endloop1 | endloop1 | 810 | 0.022% |
| 110 | J2_jump | jump #r22:2 | 14926 | 0.402% |
| 111 | J2_jumpf | if (!Pu4) jump:nt #r15:2 | 1724 | 0.046% |
| 112 | J2_jumpfnew | if (!Pu4.new) jump:nt #r15:2 | 1046 | 0.028% |
| 113 | J2_jumpfnewpt | if (!Pu4.new) jump:t #r15:2 | 224 | 0.006% |
| 114 | J2_jumpr | jumpr Rs32 | 13059 | 0.351% |
| 115 | J2_jumprf | if (!Pu4) jumpr:nt Rs32 | 1509 | 0.041% |
| 116 | J2_jumprfnew | if (!Pu4.new) jumpr:nt Rs32 | 19 | 0.001% |
| 117 | J2_jumprt | if (Pu4) jumpr:nt Rs32 | 35 | 0.001% |
| 118 | J2_jumprzpt | if (Rs32!=#0) jump:t #r13:2 | 127 | 0.003% |
| 119 | J2_jumpt | if (Pu4) jump:nt #r15:2 | 7531 | 0.203% |
| 120 | J2_jumptnew | if (Pu4.new) jump:nt #r15:2 | 6442 | 0.173% |
| 121 | J2_jumptnewpt | if (Pu4.new) jump:t #r15:2 | 370 | 0.010% |
| 122 | J2_loop0i | loop0(#r7:2,#U10) | 3 | 0.000% |
| 123 | J2_loop0r | loop0(#r7:2,Rs32) | 12230 | 0.329% |
| 124 | J2_loop1r | loop1(#r7:2,Rs32) | 6 | 0.000% |
| 125 | J2_ploop1sr | p3=sp1loop0(#r7:2,Rs32) | 1270 | 0.034% |
| 126 | J2_rte | rte | 4343 | 0.117% |
| 127 | J2_trap0 | trap0(#u8) | 4334 | 0.117% |
| 128 | J4_cmpeq_f_jumpnv_t | if (!cmp.eq(Ns8.new,Rt32)) jump:t #r9:2 | 9 | 0.000% |
| 129 | J4_cmpeq_fp0_jump_t | p0=cmp.eq(Rs16,Rt16); if (!p0.new) jump:t #r9:2 | 40 | 0.001% |
| 130 | J4_cmpeq_t_jumpnv_nt | if (cmp.eq(Ns8.new,Rt32)) jump:nt #r9:2 | 321 | 0.009% |
| 131 | J4_cmpeq_t_jumpnv_t | if (cmp.eq(Ns8.new,Rt32)) jump:t #r9:2 | 14 | 0.000% |
| 132 | J4_cmpeq_tp0_jump_nt | p0=cmp.eq(Rs16,Rt16); if (p0.new) jump:nt #r9:2 | 3 | 0.000% |
| 133 | J4_cmpeq_tp0_jump_t | p0=cmp.eq(Rs16,Rt16); if (p0.new) jump:t #r9:2 | 6 | 0.000% |
| 134 | J4_cmpeq_tp1_jump_nt | p1=cmp.eq(Rs16,Rt16); if (p1.new) jump:nt #r9:2 | 7 | 0.000% |
| 135 | J4_cmpeqi_f_jumpnv_nt | if (!cmp.eq(Ns8.new,#U5)) jump:nt #r9:2 | 11 | 0.000% |
| 136 | J4_cmpeqi_f_jumpnv_t | if (!cmp.eq(Ns8.new,#U5)) jump:t #r9:2 | 55 | 0.001% |
| 137 | J4_cmpeqi_fp0_jump_nt | p0=cmp.eq(Rs16,#U5); if (!p0.new) jump:nt #r9:2 | 24 | 0.001% |
| 138 | J4_cmpeqi_fp0_jump_t | p0=cmp.eq(Rs16,#U5); if (!p0.new) jump:t #r9:2 | 2847 | 0.077% |
| 139 | J4_cmpeqi_t_jumpnv_nt | if (cmp.eq(Ns8.new,#U5)) jump:nt #r9:2 | 386 | 0.010% |
| 140 | J4_cmpeqi_t_jumpnv_t | if (cmp.eq(Ns8.new,#U5)) jump:t #r9:2 | 6 | 0.000% |
| 141 | J4_cmpeqi_tp0_jump_nt | p0=cmp.eq(Rs16,#U5); if (p0.new) jump:nt #r9:2 | 15811 | 0.426% |
| 142 | J4_cmpeqi_tp0_jump_t | p0=cmp.eq(Rs16,#U5); if (p0.new) jump:t #r9:2 | 179 | 0.005% |
| 143 | J4_cmpeqi_tp1_jump_nt | p1=cmp.eq(Rs16,#U5); if (p1.new) jump:nt #r9:2 | 17 | 0.000% |
| 144 | J4_cmpeqn1_fp0_jump_t | p0=cmp.eq(Rs16,#-1); if (!p0.new) jump:t #r9:2 | 4328 | 0.116% |
| 145 | J4_cmpeqn1_t_jumpnv_nt | if (cmp.eq(Ns8.new,#-1)) jump:nt #r9:2 | 6 | 0.000% |
| 146 | J4_cmpgt_f_jumpnv_t | if (!cmp.gt(Ns8.new,Rt32)) jump:t #r9:2 | 26 | 0.001% |
| 147 | J4_cmpgt_fp0_jump_nt | p0=cmp.gt(Rs16,Rt16); if (!p0.new) jump:nt #r9:2 | 8 | 0.000% |
| 148 | J4_cmpgt_fp0_jump_t | p0=cmp.gt(Rs16,Rt16); if (!p0.new) jump:t #r9:2 | 37 | 0.001% |
| 149 | J4_cmpgt_tp0_jump_t | p0=cmp.gt(Rs16,Rt16); if (p0.new) jump:t #r9:2 | 6 | 0.000% |
| 150 | J4_cmpgti_f_jumpnv_nt | if (!cmp.gt(Ns8.new,#U5)) jump:nt #r9:2 | 248 | 0.007% |
| 151 | J4_cmpgti_f_jumpnv_t | if (!cmp.gt(Ns8.new,#U5)) jump:t #r9:2 | 202 | 0.005% |
| 152 | J4_cmpgti_fp0_jump_nt | p0=cmp.gt(Rs16,#U5); if (!p0.new) jump:nt #r9:2 | 92 | 0.002% |
| 153 | J4_cmpgti_fp0_jump_t | p0=cmp.gt(Rs16,#U5); if (!p0.new) jump:t #r9:2 | 4 | 0.000% |
| 154 | J4_cmpgti_t_jumpnv_t | if (cmp.gt(Ns8.new,#U5)) jump:t #r9:2 | 2 | 0.000% |
| 155 | J4_cmpgti_tp0_jump_nt | p0=cmp.gt(Rs16,#U5); if (p0.new) jump:nt #r9:2 | 29 | 0.001% |
| 156 | J4_cmpgti_tp0_jump_t | p0=cmp.gt(Rs16,#U5); if (p0.new) jump:t #r9:2 | 228 | 0.006% |
| 157 | J4_cmpgti_tp1_jump_t | p1=cmp.gt(Rs16,#U5); if (p1.new) jump:t #r9:2 | 2 | 0.000% |
| 158 | J4_cmpgtn1_f_jumpnv_nt | if (!cmp.gt(Ns8.new,#-1)) jump:nt #r9:2 | 15 | 0.000% |
| 159 | J4_cmpgtn1_fp0_jump_nt | p0=cmp.gt(Rs16,#-1); if (!p0.new) jump:nt #r9:2 | 14 | 0.000% |
| 160 | J4_cmpgtn1_fp0_jump_t | p0=cmp.gt(Rs16,#-1); if (!p0.new) jump:t #r9:2 | 2 | 0.000% |
| 161 | J4_cmpgtn1_tp0_jump_t | p0=cmp.gt(Rs16,#-1); if (p0.new) jump:t #r9:2 | 14 | 0.000% |
| 162 | J4_cmpgtu_f_jumpnv_t | if (!cmp.gtu(Ns8.new,Rt32)) jump:t #r9:2 | 11 | 0.000% |
| 163 | J4_cmpgtu_fp0_jump_nt | p0=cmp.gtu(Rs16,Rt16); if (!p0.new) jump:nt #r9:2 | 1 | 0.000% |
| 164 | J4_cmpgtu_fp0_jump_t | p0=cmp.gtu(Rs16,Rt16); if (!p0.new) jump:t #r9:2 | 2 | 0.000% |
| 165 | J4_cmpgtu_t_jumpnv_t | if (cmp.gtu(Ns8.new,Rt32)) jump:t #r9:2 | 176 | 0.005% |
| 166 | J4_cmpgtu_tp0_jump_nt | p0=cmp.gtu(Rs16,Rt16); if (p0.new) jump:nt #r9:2 | 11 | 0.000% |
| 167 | J4_cmpgtu_tp0_jump_t | p0=cmp.gtu(Rs16,Rt16); if (p0.new) jump:t #r9:2 | 9 | 0.000% |
| 168 | J4_cmpgtu_tp1_jump_t | p1=cmp.gtu(Rs16,Rt16); if (p1.new) jump:t #r9:2 | 6 | 0.000% |
| 169 | J4_cmpgtui_fp0_jump_nt | p0=cmp.gtu(Rs16,#U5); if (!p0.new) jump:nt #r9:2 | 9 | 0.000% |
| 170 | J4_cmpgtui_tp0_jump_t | p0=cmp.gtu(Rs16,#U5); if (p0.new) jump:t #r9:2 | 29 | 0.001% |
| 171 | J4_cmplt_f_jumpnv_nt | if (!cmp.gt(Rt32,Ns8.new)) jump:nt #r9:2 | 22 | 0.001% |
| 172 | J4_cmplt_f_jumpnv_t | if (!cmp.gt(Rt32,Ns8.new)) jump:t #r9:2 | 2 | 0.000% |
| 173 | J4_cmplt_t_jumpnv_t | if (cmp.gt(Rt32,Ns8.new)) jump:t #r9:2 | 20 | 0.001% |
| 174 | J4_cmpltu_f_jumpnv_nt | if (!cmp.gtu(Rt32,Ns8.new)) jump:nt #r9:2 | 7 | 0.000% |
| 175 | J4_cmpltu_f_jumpnv_t | if (!cmp.gtu(Rt32,Ns8.new)) jump:t #r9:2 | 371 | 0.010% |
| 176 | J4_cmpltu_t_jumpnv_t | if (cmp.gtu(Rt32,Ns8.new)) jump:t #r9:2 | 13 | 0.000% |
| 177 | J4_jumpseti | Rd16=#U6 ; jump #r9:2 | 35 | 0.001% |
| 178 | J4_jumpsetr | Rd16=Rs16 ; jump #r9:2 | 26 | 0.001% |
| 179 | J4_tstbit0_fp0_jump_t | p0=tstbit(Rs16,#0); if (!p0.new) jump:t #r9:2 | 14 | 0.000% |
| 180 | L2_deallocframe | Rdd32=deallocframe(Rs32):raw | 178 | 0.005% |
| 181 | L2_loadrb_io | Rd32=memb(Rs32+#s11:0) | 9007 | 0.242% |
| 182 | L2_loadrb_pi | Rd32=memb(Rx32++#s4:0) | 347 | 0.009% |
| 183 | L2_loadrd_io | Rdd32=memd(Rs32+#s11:3) | 427 | 0.011% |
| 184 | L2_loadrd_pi | Rdd32=memd(Rx32++#s4:3) | 8652 | 0.233% |
| 185 | L2_loadrdgp | Rdd32=memd(gp+#u16:3) | 6 | 0.000% |
| 186 | L2_loadrh_io | Rd32=memh(Rs32+#s11:1) | 77 | 0.002% |
| 187 | L2_loadri_io | Rd32=memw(Rs32+#s11:2) | 42789 | 1.152% |
| 188 | L2_loadrigp | Rd32=memw(gp+#u16:2) | 7046 | 0.190% |
| 189 | L2_loadrub_io | Rd32=memub(Rs32+#s11:0) | 604 | 0.016% |
| 190 | L2_loadrub_pi | Rd32=memub(Rx32++#s4:0) | 73 | 0.002% |
| 191 | L2_loadrubgp | Rd32=memub(gp+#u16:0) | 220 | 0.006% |
| 192 | L2_loadruh_io | Rd32=memuh(Rs32+#s11:1) | 387 | 0.010% |
| 193 | L2_loadw_locked | Rd32=memw_locked(Rs32) | 649 | 0.017% |
| 194 | L2_ploadrbfnew_io | if (!Pt4.new) Rd32=memb(Rs32+#u6:0) | 174 | 0.005% |
| 195 | L2_ploadrbtnew_io | if (Pt4.new) Rd32=memb(Rs32+#u6:0) | 8 | 0.000% |
| 196 | L2_ploadrdf_io | if (!Pt4) Rdd32=memd(Rs32+#u6:3) | 3 | 0.000% |
| 197 | L2_ploadrdfnew_io | if (!Pt4.new) Rdd32=memd(Rs32+#u6:3) | 34 | 0.001% |
| 198 | L2_ploadrdtnew_io | if (Pt4.new) Rdd32=memd(Rs32+#u6:3) | 8 | 0.000% |
| 199 | L2_ploadrhfnew_io | if (!Pt4.new) Rd32=memh(Rs32+#u6:1) | 28 | 0.001% |
| 200 | L2_ploadrhtnew_io | if (Pt4.new) Rd32=memh(Rs32+#u6:1) | 6 | 0.000% |
| 201 | L2_ploadrif_io | if (!Pt4) Rd32=memw(Rs32+#u6:2) | 5 | 0.000% |
| 202 | L2_ploadrifnew_io | if (!Pt4.new) Rd32=memw(Rs32+#u6:2) | 20 | 0.001% |
| 203 | L2_ploadrit_io | if (Pt4) Rd32=memw(Rs32+#u6:2) | 4860 | 0.131% |
| 204 | L2_ploadritnew_io | if (Pt4.new) Rd32=memw(Rs32+#u6:2) | 870 | 0.023% |
| 205 | L2_ploadrubfnew_io | if (!Pt4.new) Rd32=memub(Rs32+#u6:0) | 5 | 0.000% |
| 206 | L2_ploadrubfnew_pi | if (!Pt4.new) Rd32=memub(Rx32++#s4:0) | 10 | 0.000% |
| 207 | L2_ploadruhfnew_io | if (!Pt4.new) Rd32=memuh(Rs32+#u6:1) | 15 | 0.000% |
| 208 | L2_ploadruhtnew_io | if (Pt4.new) Rd32=memuh(Rs32+#u6:1) | 8 | 0.000% |
| 209 | L4_add_memopw_io | memw(Rs32+#u6:2)+=Rt32 | 175 | 0.005% |
| 210 | L4_ior_memoph_io | memh(Rs32+#u6:1)=setbit(#U5) | 1 | 0.000% |
| 211 | L4_loadrd_rr | Rdd32=memd(Rs32+Rt32<<#u2) | 29 | 0.001% |
| 212 | L4_loadrh_rr | Rd32=memh(Rs32+Rt32<<#u2) | 10 | 0.000% |
| 213 | L4_loadri_ap | Rd32=memw(Re32=#U6) | 3 | 0.000% |
| 214 | L4_loadri_rr | Rd32=memw(Rs32+Rt32<<#u2) | 6 | 0.000% |
| 215 | L4_loadri_ur | Rd32=memw(Rt32<<#u2+#U6) | 34 | 0.001% |
| 216 | L4_loadrub_rr | Rd32=memub(Rs32+Rt32<<#u2) | 52 | 0.001% |
| 217 | L4_loadruh_rr | Rd32=memuh(Rs32+Rt32<<#u2) | 22 | 0.001% |
| 218 | L4_loadruh_ur | Rd32=memuh(Rt32<<#u2+#U6) | 22 | 0.001% |
| 219 | L4_loadw_phys | Rd32=memw_phys(Rs32,Rt32) | 16 | 0.000% |
| 220 | L4_or_memoph_io | memh(Rs32+#u6:1)|=Rt32 | 5 | 0.000% |
| 221 | L4_ploadrbtnew_rr | if (Pv4.new) Rd32=memb(Rs32+Rt32<<#u2) | 2 | 0.000% |
| 222 | L4_ploadrdf_rr | if (!Pv4) Rdd32=memd(Rs32+Rt32<<#u2) | 2 | 0.000% |
| 223 | L4_ploadrdfnew_rr | if (!Pv4.new) Rdd32=memd(Rs32+Rt32<<#u2) | 12 | 0.000% |
| 224 | L4_ploadritnew_abs | if (Pt4.new) Rd32=memw(#u6) | 4 | 0.000% |
| 225 | L4_ploadrubfnew_abs | if (!Pt4.new) Rd32=memub(#u6) | 182 | 0.005% |
| 226 | L4_return | Rdd32=dealloc_return(Rs32):raw | 1401 | 0.038% |
| 227 | M2_acci | Rx32+=add(Rs32,Rt32) | 2198 | 0.059% |
| 228 | M2_dpmpyuu_acc_s0 | Rxx32+=mpyu(Rs32,Rt32) | 8 | 0.000% |
| 229 | M2_dpmpyuu_s0 | Rdd32=mpyu(Rs32,Rt32) | 50 | 0.001% |
| 230 | M2_maci | Rx32+=mpyi(Rs32,Rt32) | 1120 | 0.030% |
| 231 | M2_mnaci | Rx32-=mpyi(Rs32,Rt32) | 26 | 0.001% |
| 232 | M2_mpyi | Rd32=mpyi(Rs32,Rt32) | 2401 | 0.065% |
| 233 | M2_mpysip | Rd32=+mpyi(Rs32,#u8) | 12 | 0.000% |
| 234 | S2_addasl_rrri | Rd32=addasl(Rt32,Rs32,#u3) | 895 | 0.024% |
| 235 | S2_allocframe | allocframe(Rx32,#u11:3):raw | 2391 | 0.064% |
| 236 | S2_asl_i_p | Rdd32=asl(Rss32,#u6) | 8 | 0.000% |
| 237 | S2_asl_i_r | Rd32=asl(Rs32,#u5) | 321 | 0.009% |
| 238 | S2_asl_i_r_acc | Rx32+=asl(Rs32,#u5) | 8 | 0.000% |
| 239 | S2_asl_i_r_nac | Rx32-=asl(Rs32,#u5) | 4318 | 0.116% |
| 240 | S2_asl_i_r_or | Rx32|=asl(Rs32,#u5) | 33 | 0.001% |
| 241 | S2_asr_i_p | Rdd32=asr(Rss32,#u6) | 2 | 0.000% |
| 242 | S2_asr_i_r | Rd32=asr(Rs32,#u5) | 256 | 0.007% |
| 243 | S2_asr_i_r_and | Rx32&=asr(Rs32,#u5) | 2 | 0.000% |
| 244 | S2_asr_i_r_nac | Rx32-=asr(Rs32,#u5) | 2 | 0.000% |
| 245 | S2_cl0 | Rd32=cl0(Rs32) | 402 | 0.011% |
| 246 | S2_cl0p | Rd32=cl0(Rss32) | 72 | 0.002% |
| 247 | S2_clrbit_i | Rd32=clrbit(Rs32,#u5) | 27 | 0.001% |
| 248 | S2_ct0 | Rd32=ct0(Rs32) | 3 | 0.000% |
| 249 | S2_extractu | Rd32=extractu(Rs32,#u5,#U5) | 370 | 0.010% |
| 250 | S2_insert | Rx32=insert(Rs32,#u5,#U5) | 22 | 0.001% |
| 251 | S2_insertp | Rxx32=insert(Rss32,#u6,#U6) | 16 | 0.000% |
| 252 | S2_lsl_r_p | Rdd32=lsl(Rss32,Rt32) | 72 | 0.002% |
| 253 | S2_lsl_r_vw | Rdd32=vlslw(Rss32,Rt32) | 196 | 0.005% |
| 254 | S2_lsr_i_p | Rdd32=lsr(Rss32,#u6) | 522 | 0.014% |
| 255 | S2_lsr_i_p_acc | Rxx32+=lsr(Rss32,#u6) | 16 | 0.000% |
| 256 | S2_lsr_i_r | Rd32=lsr(Rs32,#u5) | 12132 | 0.327% |
| 257 | S2_lsr_i_vw | Rdd32=vlsrw(Rss32,#u5) | 394 | 0.011% |
| 258 | S2_pstorerbt_pi | if (Pv4) memb(Rx32++#s4:0)=Rt32 | 255 | 0.007% |
| 259 | S2_pstorerbtnew_pi | if (Pv4.new) memb(Rx32++#s4:0)=Rt32 | 1 | 0.000% |
| 260 | S2_pstorerdt_io | if (Pv4) memd(Rs32+#u6:3)=Rtt32 | 2 | 0.000% |
| 261 | S2_pstorerdt_pi | if (Pv4) memd(Rx32++#s4:3)=Rtt32 | 8652 | 0.233% |
| 262 | S2_pstorerdtnew_pi | if (Pv4.new) memd(Rx32++#s4:3)=Rtt32 | 8 | 0.000% |
| 263 | S2_pstorerhnewt_io | if (Pv4) memh(Rs32+#u6:1)=Nt8.new | 20 | 0.001% |
| 264 | S2_pstorerht_io | if (Pv4) memh(Rs32+#u6:1)=Rt32 | 2 | 0.000% |
| 265 | S2_pstorerhtnew_pi | if (Pv4.new) memh(Rx32++#s4:1)=Rt32 | 2 | 0.000% |
| 266 | S2_pstorerif_io | if (!Pv4) memw(Rs32+#u6:2)=Rt32 | 2 | 0.000% |
| 267 | S2_pstoreritnew_pi | if (Pv4.new) memw(Rx32++#s4:2)=Rt32 | 2 | 0.000% |
| 268 | S2_setbit_i | Rd32=setbit(Rs32,#u5) | 24 | 0.001% |
| 269 | S2_storerb_io | memb(Rs32+#s11:0)=Rt32 | 212 | 0.006% |
| 270 | S2_storerb_pi | memb(Rx32++#s4:0)=Rt32 | 12 | 0.000% |
| 271 | S2_storerbgp | memb(gp+#u16:0)=Rt32 | 2 | 0.000% |
| 272 | S2_storerbnew_io | memb(Rs32+#s11:0)=Nt8.new | 34 | 0.001% |
| 273 | S2_storerbnew_pi | memb(Rx32++#s4:0)=Nt8.new | 75 | 0.002% |
| 274 | S2_storerd_io | memd(Rs32+#s11:3)=Rtt32 | 13949 | 0.375% |
| 275 | S2_storerh_io | memh(Rs32+#s11:1)=Rt32 | 80 | 0.002% |
| 276 | S2_storerhnew_io | memh(Rs32+#s11:1)=Nt8.new | 117 | 0.003% |
| 277 | S2_storeri_io | memw(Rs32+#s11:2)=Rt32 | 10229 | 0.275% |
| 278 | S2_storerigp | memw(gp+#u16:2)=Rt32 | 11 | 0.000% |
| 279 | S2_storerinew_io | memw(Rs32+#s11:2)=Nt8.new | 6020 | 0.162% |
| 280 | S2_storerinewgp | memw(gp+#u16:2)=Nt8.new | 5 | 0.000% |
| 281 | S2_storew_locked | memw_locked(Rs32,Pd4)=Rt32 | 449 | 0.012% |
| 282 | S2_togglebit_i | Rd32=togglebit(Rs32,#u5) | 6 | 0.000% |
| 283 | S2_tstbit_i | Pd4=tstbit(Rs32,#u5) | 87 | 0.002% |
| 284 | S2_vsplatrb | Rd32=vsplatb(Rs32) | 4 | 0.000% |
| 285 | S4_addaddi | Rd32=add(Rs32,add(Ru32,#s6)) | 10824 | 0.291% |
| 286 | S4_addi_asl_ri | Rx32=add(#u8,asl(Rx32,#U5)) | 402 | 0.011% |
| 287 | S4_ntstbit_i | Pd4=!tstbit(Rs32,#u5) | 373 | 0.010% |
| 288 | S4_or_andix | Rx32=or(Ru32,and(Rx32,#s10)) | 62 | 0.002% |
| 289 | S4_pstorerbt_rr | if (Pv4) memb(Rs32+Ru32<<#u2)=Rt32 | 2 | 0.000% |
| 290 | S4_pstorerbtnew_io | if (Pv4.new) memb(Rs32+#u6:0)=Rt32 | 12 | 0.000% |
| 291 | S4_pstorerdtnew_io | if (Pv4.new) memd(Rs32+#u6:3)=Rtt32 | 4 | 0.000% |
| 292 | S4_pstorerifnew_io | if (!Pv4.new) memw(Rs32+#u6:2)=Rt32 | 178 | 0.005% |
| 293 | S4_pstorerifnew_rr | if (!Pv4.new) memw(Rs32+Ru32<<#u2)=Rt32 | 6 | 0.000% |
| 294 | S4_pstorerinewtnew_io | if (Pv4.new) memw(Rs32+#u6:2)=Nt8.new | 2 | 0.000% |
| 295 | S4_pstoreritnew_io | if (Pv4.new) memw(Rs32+#u6:2)=Rt32 | 2 | 0.000% |
| 296 | S4_storeirb_io | memb(Rs32+#u6:0)=#S8 | 8 | 0.000% |
| 297 | S4_storeirh_io | memh(Rs32+#u6:1)=#S8 | 200 | 0.005% |
| 298 | S4_storeirhtnew_io | if (Pv4.new) memh(Rs32+#u6:1)=#S6 | 23 | 0.001% |
| 299 | S4_storeiri_io | memw(Rs32+#u6:2)=#S8 | 285 | 0.008% |
| 300 | S4_storeirif_io | if (!Pv4) memw(Rs32+#u6:2)=#S6 | 2 | 0.000% |
| 301 | S4_storeirifnew_io | if (!Pv4.new) memw(Rs32+#u6:2)=#S6 | 14 | 0.000% |
| 302 | S4_storeiritnew_io | if (Pv4.new) memw(Rs32+#u6:2)=#S6 | 522 | 0.014% |
| 303 | S4_storerb_rr | memb(Rs32+Ru32<<#u2)=Rt32 | 2 | 0.000% |
| 304 | S4_storerd_rr | memd(Rs32+Ru32<<#u2)=Rtt32 | 16 | 0.000% |
| 305 | S4_storerhnew_rr | memh(Rs32+Ru32<<#u2)=Nt8.new | 20 | 0.001% |
| 306 | S4_storeri_rr | memw(Rs32+Ru32<<#u2)=Rt32 | 7 | 0.000% |
| 307 | S4_subaddi | Rd32=add(Rs32,sub(#s6,Ru32)) | 9 | 0.000% |
| 308 | SA1_addi | Rx16=add(Rx16,#s7) | 28 | 0.001% |
| 309 | SA1_addsp | Rd16=add(r29,#u6:2) | 5698 | 0.153% |
| 310 | SA1_clrtnew | if (p0.new) Rd16=#0 | 1088 | 0.029% |
| 311 | SA1_cmpeqi | p0=cmp.eq(Rs16,#u2) | 2520 | 0.068% |
| 312 | SA1_combine0i | Rdd8=combine(#0,#u2) | 3 | 0.000% |
| 313 | SA1_combinezr | Rdd8=combine(#0,Rs16) | 180 | 0.005% |
| 314 | SA1_dec | Rd16=add(Rs16,#-1) | 114 | 0.003% |
| 315 | SA1_inc | Rd16=add(Rs16,#1) | 162 | 0.004% |
| 316 | SA1_seti | Rd16=#u6 | 569 | 0.015% |
| 317 | SA1_sxth | Rd16=sxth(Rs16) | 42 | 0.001% |
| 318 | SA1_tfr | Rd16=Rs16 | 3254 | 0.088% |
| 319 | SL1_loadri_io | Rd16=memw(Rs16+#u4:2) | 860 | 0.023% |
| 320 | SL1_loadrub_io | Rd16=memub(Rs16+#u4:0) | 208 | 0.006% |
| 321 | SL2_deallocframe | deallocframe | 215 | 0.006% |
| 322 | SL2_jumpr31 | jumpr r31 | 41 | 0.001% |
| 323 | SL2_jumpr31_t | if (p0) jumpr r31 | 197 | 0.005% |
| 324 | SL2_jumpr31_tnew | if (p0.new) jumpr:nt r31 | 2346 | 0.063% |
| 325 | SL2_loadrb_io | Rd16=memb(Rs16+#u3:0) | 40 | 0.001% |
| 326 | SL2_loadrd_sp | Rdd8=memd(r29+#u5:3) | 20443 | 0.550% |
| 327 | SL2_loadri_sp | Rd16=memw(r29+#u5:2) | 351 | 0.009% |
| 328 | SL2_loadruh_io | Rd16=memuh(Rs16+#u3:1) | 217 | 0.006% |
| 329 | SL2_return | dealloc_return | 5943 | 0.160% |
| 330 | SS1_storew_io | memw(Rs16+#u4:2)=Rt16 | 51 | 0.001% |
| 331 | SS2_allocframe | allocframe(#u5:3) | 5349 | 0.144% |
| 332 | SS2_storebi0 | memb(Rs16+#u4:0)=#0 | 6 | 0.000% |
| 333 | SS2_stored_sp | memd(r29+#s6:3)=Rtt8 | 8116 | 0.218% |
| 334 | SS2_storew_sp | memw(r29+#u5:2)=Rt16 | 6518 | 0.175% |
| 335 | SS2_storewi0 | memw(Rs16+#u4:2)=#0 | 5 | 0.000% |
| 336 | V6_lvsplatw | Vd32=vsplat(Rt32) | 18 | 0.000% |
| 337 | V6_vL32Ub_pi | Vd32=vmemu(Rx32++#s3) | 218700 | 5.886% |
| 338 | V6_vL32b_ai | Vd32=vmem(Rt32+#s4) | 36450 | 0.981% |
| 339 | V6_vL32b_cur_ai | Vd32.cur=vmem(Rt32+#s4) | 18225 | 0.490% |
| 340 | V6_vS32Ub_pi | vmemu(Rx32++#s3)=Vs32 | 72901 | 1.962% |
| 341 | V6_vS32b_new_ai | vmem(Rt32+#s4)=Os8.new | 6 | 0.000% |
| 342 | V6_vabsdiffuh | Vd32.uh=vabsdiff(Vu32.uh,Vv32.uh) | 291600 | 7.848% |
| 343 | V6_vaddh_dv | Vdd32.h=vadd(Vuu32.h,Vvv32.h) | 218700 | 5.886% |
| 344 | V6_vassign | Vd32=Vu32 | 218700 | 5.886% |
| 345 | V6_vcombine | Vdd32=vcombine(Vu32,Vv32) | 90 | 0.002% |
| 346 | V6_vminuh | Vd32.uh=vmin(Vu32.uh,Vv32.uh) | 145800 | 3.924% |
| 347 | V6_vmpabus | Vdd32.h=vmpa(Vuu32.ub,Rt32.b) | 145800 | 3.924% |
| 348 | V6_vshuffeb | Vd32.b=vshuffe(Vu32.b,Vv32.b) | 72900 | 1.962% |
| 349 | V6_vtmpybus | Vdd32.h=vtmpy(Vuu32.ub,Rt32.b) | 109350 | 2.943% |
| 350 | V6_vzb | Vdd32.uh=vzxt(Vu32.ub) | 145800 | 3.924% |
| 351 | Y2_crswap0 | crswap(Rx32,sgp0) | 8686 | 0.234% |
| 352 | Y2_cswi | cswi(Rs32) | 1 | 0.000% |
| 353 | Y2_dccleaninva | dccleaninva(Rs32) | 211508 | 5.692% |
| 354 | Y2_dcfetchbo | dcfetch(Rs32+#u11:3) | 1270 | 0.034% |
| 355 | Y2_dcinva | dcinva(Rs32) | 64800 | 1.744% |
| 356 | Y2_dczeroa | dczeroa(Rs32) | 65440 | 1.761% |
| 357 | Y2_isync | isync | 164 | 0.004% |
| 358 | Y2_l2kill | l2kill | 1 | 0.000% |
| 359 | Y2_syncht | syncht | 2 | 0.000% |
| 360 | Y2_tfrscrr | Rd32=Ss64 | 539 | 0.015% |
| 361 | Y2_tfrsrcr | Sd64=Rs32 | 37 | 0.001% |
| 362 | Y2_tlbp | Rd32=tlbp(Rs32) | 5 | 0.000% |
| 363 | Y2_tlbw | tlbw(Rss32,Rt32) | 133 | 0.004% |
| | Total Count | | 3715623 | 100% |
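Each row's Pct column is just that row's Count divided by the table's Total Count. A minimal cross-check sketch follows; the three sample rows and the total are copied from the table above, while the three-decimal rounding is an assumption about how the report formats percentages:

```python
# Verify that a row's Pct field matches Count / Total Count.
# Sample rows copied from the instruction histogram; TOTAL is the
# table's reported Total Count.
TOTAL = 3_715_623

rows = [
    # (tag, count, reported_pct)
    ("A2_addi",     427_411, 11.503),
    ("J2_endloop0", 361_197,  9.721),
    ("A2_nop",      323_491,  8.706),
]

for tag, count, reported in rows:
    pct = round(100.0 * count / TOTAL, 3)
    # Allow a little slack in case the report truncates rather than rounds.
    assert abs(pct - reported) < 0.001, (tag, pct, reported)
```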
NOTE: On each clock tick, the activity of every thread is accumulated. For example, assume a 1 GHz core with 4 threads simulates for 1 second, but only 1 thread is active and the other 3 are in WAIT mode. Further assume the active thread spends half its time committing packets and half its time stalled. In this case the total cycle count (1 billion ticks x 4 threads) will be 4 billion, WAIT_CYCLES will be 3 billion, commits will be 500 million, and the total of all other stall types will be 500 million.
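The accounting described in the note can be restated numerically; the figures below simply reproduce the hypothetical 1 GHz / 4-thread scenario:

```python
# Hypothetical scenario from the note: a 1 GHz core with 4 hardware
# threads, simulated for 1 second, with only one thread running and the
# other three parked in WAIT mode.
FREQ_HZ = 1_000_000_000
THREADS = 4
SECONDS = 1

total_cycles = FREQ_HZ * SECONDS * THREADS        # every thread sampled each tick -> 4 billion
wait_cycles = FREQ_HZ * SECONDS * (THREADS - 1)   # 3 idle threads -> 3 billion
active_cycles = total_cycles - wait_cycles        # 1 billion for the running thread

commit_cycles = active_cycles // 2                # half the time committing packets -> 500 million
stall_cycles = active_cycles - commit_cycles      # half the time stalled -> 500 million
```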
Simulation Settings:
| Profile data version: | 2.5 |
| cache_config: | L1-I$ = 16 KB, L1-D$ = 16 KB, L2-$ = 512 KB |
| command_line: | --magic_angel --quiet --info_quiet --revid 0x4066 --axibusratio 2 --axibuspenalty 75 --ahbbusratio 2 --ahbbuspenalty 75 --axi2busratio 2 --axi2buspenalty 75 --l2tcm_base 0xd8000000 --timing --clade2_stable_assert_hack 0x0 |
| core: | V66A_512 |
| q6version: | v66 |
| revid: | 4066 |